LocalAI is a straightforward, drop-in replacement REST API compatible with the OpenAI API for local CPU inferencing. It is based on llama.cpp, gpt4all and ggml, and includes support for GPT4ALL-J, which is licensed under Apache 2.0.
- OpenAI compatible API
- Supports multiple models
- Once loaded the first time, models are kept in memory for faster inference
- Support for prompt templates
- Doesn't shell out, but uses C bindings for faster inference and better performance, via go-llama.cpp and go-gpt4all-j.cpp
LocalAI is a community-driven project, focused on making AI accessible to anyone. Contributions, feedback and PRs are welcome! It was initially created by mudler at the SpectroCloud OSS Office.
- Follow @LocalAI_API on Twitter.
- Reddit post about LocalAI.
- Hacker News post - help us out by voting if you like this project.
- Tutorial to use k8sgpt with LocalAI - an excellent use case for LocalAI, using AI to analyse Kubernetes clusters.
It is compatible with the models supported by llama.cpp, and also supports GPT4ALL-J and cerebras-GPT with ggml.
Tested with:
- Vicuna
- Alpaca
- GPT4ALL
- GPT4ALL-J
- Koala
- cerebras-GPT with ggml
It should also be compatible with StableLM and GPTNeoX ggml models (untested).
Note: You might need to convert older models to the new format; see here, for instance, to run gpt4all.
LocalAI comes by default as a container image. You can check out all the available images with corresponding tags here.
The easiest way to run LocalAI is with docker-compose:
```bash
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# Start with docker-compose
docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
  "model": "your-model.bin",
  "prompt": "A long time ago in a galaxy far, far away",
  "temperature": 0.7
}'
```
For example, to run LocalAI with the gpt4all-j model:
```bash
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# Start with docker-compose
docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-gpt4all-j",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.9
}'
# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
```
To build locally, run `make build` (see below).
For other examples of how to integrate with other projects, for instance chatbot-ui, see: examples.
The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the stanford-alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{{.Input}}

### Response:
```
See the prompt-templates directory in this repository for templates for some of the most popular models.
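The gpt4all-j quickstart above pairs the model file ggml-gpt4all-j with a template of the same base name, ggml-gpt4all-j.tmpl. A minimal sketch of reusing a shipped template for your own model, assuming templates are matched to models by file name:
```bash
# Assumption: a <model-file>.tmpl placed next to the model file is picked up
# automatically as that model's prompt template.
cp prompt-templates/ggml-gpt4all-j.tmpl models/your-model.bin.tmpl
```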
Currently LocalAI comes as container images and can be used with docker or a container engine of choice.
LocalAI can be installed inside Kubernetes with helm.
- Standard deployment: install the chart with `.Values.deployment.volumes.enabled == false` and `.Values.dataVolume.enabled == false`, as sketched below.
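A minimal sketch of the standard install, assuming the chart lives under charts/local-ai as in the two-phase example below:
```bash
# Install without persistent volumes; the --set keys mirror the chart values
# named above (assumption: the chart defaults may already match, in which
# case the --set flags are redundant).
helm install local-ai charts/local-ai -n local-ai --create-namespace \
  --set deployment.volumes.enabled=false \
  --set dataVolume.enabled=false
```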
- Advanced, two-phase deployment to provision the models directory using a DataVolume. Requires the Containerized Data Importer (CDI) to be pre-installed in your cluster.
First, install the chart with `.Values.deployment.volumes.enabled == false` and `.Values.dataVolume.enabled == true`:
```bash
helm install local-ai charts/local-ai -n local-ai --create-namespace
```
Wait for CDI to create an importer Pod for the DataVolume and for that Pod to finish provisioning the model archive inside the PV.
Once the PV is provisioned and the importer Pod removed, set `.Values.deployment.volumes.enabled == true` and `.Values.dataVolume.enabled == false`, then upgrade the chart:
```bash
helm upgrade local-ai -n local-ai charts/local-ai
```
This will update the local-ai Deployment to mount the PV that was provisioned by the DataVolume.
LocalAI provides an API for running text generation as a service that follows the OpenAI reference and can be used as a drop-in replacement. Once loaded the first time, models are kept in memory for faster subsequent inference.
```bash
docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4
```
You should see:
```
┌───────────────────────────────────────────────────┐
│                   Fiber v2.42.0                   │
│               http://127.0.0.1:8080               │
│       (bound on host 0.0.0.0 and port 8080)       │
│                                                   │
│ Handlers ............. 1  Processes ........... 1 │
│ Prefork ....... Disabled  PID ................. 1 │
└───────────────────────────────────────────────────┘
```
You can control the API server options with command line arguments:
```
local-api --models-path <model_path> [--address <address>] [--threads <num_threads>]
```
The API takes the following parameters:
| Parameter | Environment Variable | Default Value | Description |
|---|---|---|---|
| models-path | MODELS_PATH | | The path where you have models (ending with `.bin`). |
| threads | THREADS | Number of physical cores | The number of threads to use for text generation. |
| address | ADDRESS | :8080 | The address and port to listen on. |
| context-size | CONTEXT_SIZE | 512 | Default token context size. |
| debug | DEBUG | false | Enable debug mode. |
| config-file | CONFIG_FILE | empty | Path to a LocalAI config file. |
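Each flag has an environment-variable counterpart, so the same options can be set when running the container image; a sketch using the variables from the table above:
```bash
# Mount a local models directory and configure the server via environment
# variables instead of command-line flags.
docker run -p 8080:8080 -ti --rm \
  -v $PWD/models:/models \
  -e MODELS_PATH=/models -e CONTEXT_SIZE=700 -e THREADS=4 \
  quay.io/go-skynet/local-ai:latest
```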
Once the server is running, you can make requests to it over HTTP following the OpenAI API. You can check out the OpenAI API reference.
Below is the list of supported endpoints and parameters.
Note:
- You can also specify the model as part of the OpenAI token.
- If only one model is available, the API will use it for all requests.
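For instance, a sketch of passing the model name through the Authorization header instead of the request body, per the note above (assuming the server reads the model from the bearer token):
```bash
# The model travels in the Authorization header rather than the JSON body.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ggml-gpt4all-j" \
  -d '{"messages": [{"role": "user", "content": "How are you?"}], "temperature": 0.9}'
```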
To generate a chat completion, send a POST request to the `/v1/chat/completions` endpoint with the messages in the request body:
```bash
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-koala-7b-model-q4_0-r2.bin",
  "messages": [{"role": "user", "content": "Say this is a test!"}],
  "temperature": 0.7
}'
```
Available additional parameters: `top_p`, `top_k`, `max_tokens`.
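A sketch of a request using these extra parameters (the values are arbitrary examples):
```bash
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-koala-7b-model-q4_0-r2.bin",
  "messages": [{"role": "user", "content": "Say this is a test!"}],
  "temperature": 0.7,
  "top_p": 0.9,
  "top_k": 40,
  "max_tokens": 128
}'
```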
To generate a completion, send a POST request to the `/v1/completions` endpoint with the prompt in the request body:
```bash
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-koala-7b-model-q4_0-r2.bin",
  "prompt": "A long time ago in a galaxy far, far away",
  "temperature": 0.7
}'
```
Available additional parameters: `top_p`, `top_k`, `max_tokens`.
To list all the available models, send a GET request to the `/v1/models` endpoint:
```bash
curl http://localhost:8080/v1/models
```
LocalAI can be configured to serve user-defined models with a set of default parameters and templates.
For instance, a configuration file (`gpt-3.5-turbo.yaml`) can declare the "gpt-3.5-turbo" model backed by the "testmodel" model file:
```yaml
name: gpt-3.5-turbo
parameters:
  model: testmodel
context_size: 512
threads: 10
stopwords:
- "HUMAN:"
- "### Response:"
roles:
  user: "HUMAN:"
  system: "GPT:"
template:
  completion: completion
  chat: ggml-gpt4all-j
```
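With that file in place, requests addressed to "gpt-3.5-turbo" should be served by "testmodel" with these defaults applied; a quick check:
```bash
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "How are you?"}]
}'
```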
Specifying a `config-file` via the CLI allows you to declare models in a single file as a list (a launch sketch follows the listing), for instance:
```yaml
- name: list1
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
- name: list2
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
```
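A sketch of starting the container with such a file, combining the `config-file` parameter from the table above with the image used earlier (the host file names here are arbitrary):
```bash
# Mount the models directory and the config file, then point the server at both.
docker run -p 8080:8080 -ti --rm \
  -v $PWD/models:/models -v $PWD/config.yaml:/config.yaml \
  quay.io/go-skynet/local-ai:latest --models-path /models --config-file /config.yaml
```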
See also chatbot-ui as an example of how to use config files.
It should work; however, you need to make sure you give enough resources to the container. See mudler#2.
Pre-built images should fit most modern hardware; however, you can, and in some cases might need to, build the images manually.
To build the LocalAI container image locally, you can use docker:
```bash
# Build the image (Docker image names must be lowercase)
docker build -t local-ai .
docker run local-ai
```
Or build the binary with `make`:
```bash
make build
```
Here are answers to some of the most common questions.
Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, and models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.
LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of these internally for faster inference and is easy to set up locally and deploy to Kubernetes.
Yes! If the client uses OpenAI and supports setting a different base URL for requests, you can point it at the LocalAI endpoint. This lets you use LocalAI with every application that was built to work with OpenAI, without changing the application!
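For example, a sketch for the classic OpenAI clients that honor the conventional OPENAI_API_BASE environment variable (assumption: your client reads this variable; other applications expose an equivalent base-URL setting):
```bash
# Send OpenAI-style requests to LocalAI instead of api.openai.com.
export OPENAI_API_BASE=http://localhost:8080/v1
# LocalAI doesn't validate the key by default; a placeholder satisfies clients
# that require one to be set.
export OPENAI_API_KEY=sk-placeholder
```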
Not currently, as ggml doesn't support GPUs yet: ggerganov/llama.cpp#915.
AutoGPT currently doesn't allow setting a different API URL, but there is a PR open for it, so this should be possible soon!
Feel free to open up a PR to get your project listed!
- https://medium.com/@tyler_97636/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65
- https://kairos.io/docs/examples/localai/
- Mimic OpenAI API (mudler#10)
- Binary releases (mudler#6)
- Upstream our golang bindings to llama.cpp (ggerganov/llama.cpp#351) and gpt4all
- Multi-model support
- Have a webUI!
- Allow configuration of defaults for models.
- Enable automatic downloading of models from a curated gallery, with only free-licensed models, directly from the webui.
LocalAI is a community-driven project. It was initially created by mudler at the SpectroCloud OSS Office.
MIT
- llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp for the light model version (this is compatible and tested only with that checkpoint model!)