A self-hosted, offline, ChatGPT-like chatbot, powered by Llama 2. 100% private, with no data leaving your device.
umbrel.com (we're hiring) »
[Demo video: LlamaGPT.mp4]
Running LlamaGPT on an umbrelOS home server takes just one click: simply install it from the Umbrel App Store.
You can run LlamaGPT on any x86 or arm64 system. Make sure you have Docker installed.
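To confirm Docker and the Compose plugin are available before you start (a generic sanity check, not specific to this repo):

```
# Verify Docker Engine and the Compose plugin are installed
docker --version
docker compose version
```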
Then, clone this repo and `cd` into it:

```
git clone https://github.com/getumbrel/llama-gpt.git
cd llama-gpt
```
You can now run LlamaGPT with any of the following models, depending on your hardware:

| Model size | Model used | Minimum RAM required | How to start LlamaGPT |
|---|---|---|---|
| 7B | Nous Hermes Llama 2 7B (GGML q4_0) | 8GB | `docker compose up` |
| 13B | Nous Hermes Llama 2 13B (GGML q4_0) | 16GB | `docker compose -f docker-compose-13b.yml up` |
| 70B | Meta Llama 2 70B Chat (GGML q4_0) | 48GB | `docker compose -f docker-compose-70b.yml up` |
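If you're unsure which model your machine can handle, a quick way to check available memory on Linux (a generic command, not part of this repo):

```
# Show total and available memory in human-readable units
free -h
```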
Note: On the first run, it may take a while for the model to be downloaded to the `/models` directory. You may see output like the following for a few minutes, which is normal:

```
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-13b:8000] not yet available...
```
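While the download is in progress, you can keep an eye on it by streaming the Compose logs (standard Docker Compose usage):

```
# Stream logs from all services while the model downloads
docker compose logs -f
```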
After the model has been downloaded and loaded, and the API server is running, you'll see output like:

```
llama-gpt-llama-gpt-api-13b-1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

You can then access LlamaGPT at http://localhost:3000.
To stop LlamaGPT, either press Ctrl + C or run:

```
docker compose down
```
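If you'd rather keep LlamaGPT running in the background, Compose's detached mode works as usual (generic Docker Compose usage, not repo-specific):

```
# Start in the background
docker compose up -d

# Stop and remove the containers when done
docker compose down
```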
First, make sure you have a running Kubernetes cluster and that `kubectl` is configured to interact with it. Then, clone this repo and `cd` into it.
To deploy to Kubernetes, first create a namespace:

```
kubectl create ns llama
```

Then apply the manifests under the `/deploy/kubernetes` directory:

```
kubectl apply -k deploy/kubernetes/. -n llama
```
Finally, expose your service however you normally would.
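For a quick local test, you can port-forward the UI with `kubectl`. The service name below is a hypothetical placeholder; check the manifests under `/deploy/kubernetes` for the actual name:

```
# Forward local port 3000 to the (hypothetical) UI service in the llama namespace
kubectl port-forward -n llama svc/llama-gpt-ui 3000:3000
```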
Thanks to llama-cpp-python, a drop-in replacement for the OpenAI API is available at http://localhost:3001. Open http://localhost:3001/docs to see the API documentation.
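As a minimal sketch, you can query the chat completions endpoint with curl. This assumes the standard OpenAI chat completions request format, which llama-cpp-python's server implements; see http://localhost:3001/docs for the exact schema:

```
curl http://localhost:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "How does the universe expand?"}],
    "temperature": 0
  }'
```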
We've tested LlamaGPT models on the following hardware with the default system prompt and the user prompt "How does the universe expand?", at temperature 0 to guarantee deterministic results. Generation speed is averaged over the first 10 generations.
Feel free to add your own benchmarks to this table by opening a pull request.
Nous Hermes Llama 2 7B (GGML q4_0):

| Device | Generation speed |
|---|---|
| M1 Max MacBook Pro (64GB RAM) | 8.2 tokens/sec |
| Umbrel Home (16GB RAM) | 2.7 tokens/sec |
| Raspberry Pi 4 (8GB RAM) | 0.9 tokens/sec |
Nous Hermes Llama 2 13B (GGML q4_0):

| Device | Generation speed |
|---|---|
| M1 Max MacBook Pro (64GB RAM) | 3.7 tokens/sec |
| Umbrel Home (16GB RAM) | 1.5 tokens/sec |
Meta Llama 2 70B Chat (GGML q4_0):

| Device | Generation speed |
|---|---|
| M2 Max MacBook Pro (96GB RAM) | 0.69 tokens/sec |
| GCP e2-standard-16 (16 vCPU, 64GB RAM) | 1.75 tokens/sec |
We're looking to add more features to LlamaGPT. You can see the roadmap here. The highest priorities are:
- Moving the model out of the Docker image and into a separate volume.
- Adding CUDA and Metal support (work in progress).
- Adding the ability to load custom models and making it easy to run them.
- Allowing users to switch between models.
If you're a developer who'd like to help with any of these, please open an issue to discuss the best way to tackle the challenge. If you're looking to help but not sure where to begin, check out these issues that have specifically been marked as being friendly to new contributors.
A massive thank you to the following developers and teams for making LlamaGPT possible:
- Mckay Wrigley for building Chatbot UI.
- Georgi Gerganov for implementing llama.cpp.
- Andrei for building the Python bindings for llama.cpp.
- NousResearch for fine-tuning the Llama 2 7B and 13B models.
- Tom Jobbins for quantizing the Llama 2 models.
- Meta for releasing Llama 2 under a permissive license.