llama.cpp: use local models with chatcraft
I needed a way to do some programming while offline. These days I feel very unproductive without https://chatcraft.org (the best chat UI for programming) and a good LLM to chat with about coding.
Chatcraft needed a few small fixes to enable llama.cpp support. Here's how to run local models with llama.cpp and chatcraft.org, no internet required:
Instructions
Install and run llama.cpp. Follow the instructions at https://github.com/ggerganov/llama.cpp for your platform.
For mac:
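Something like the following should work (a minimal sketch, assuming Homebrew's llama.cpp formula, which ships the `llama-server` binary; `model.gguf` is a placeholder for whatever GGUF model you've downloaded):

```sh
# Install llama.cpp via Homebrew
brew install llama.cpp

# Serve a local GGUF model with an OpenAI-compatible API on port 8080
llama-server -m model.gguf --port 8080
```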
Set up a local chatcraft dev env by following the instructions in the chatcraft repo (sketched below):
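A sketch of the usual flow, assuming the repo lives at https://github.com/tarasglek/chatcraft.org and uses a pnpm-based setup (defer to the repo's README if it differs):

```sh
# Clone chatcraft and start the Vite dev server
git clone https://github.com/tarasglek/chatcraft.org.git
cd chatcraft.org
pnpm install
pnpm dev
```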
^ will output a development url like http://localhost:5173/; open it.
Go to chatcraft settings and add http://localhost:8080/v1 to api providers. Enter a dummy api key.
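Before wiring it into chatcraft, you can sanity-check that llama-server is up by hitting its OpenAI-compatible endpoint directly (the `model` value and the `dummy` key below are arbitrary placeholders; llama-server serves whatever single model it loaded):

```sh
# Ask the local llama.cpp server for a chat completion, OpenAI-style
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer dummy" \
  -d '{
    "model": "local",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```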