PyLLaMACpp
Officially supported Python bindings for llama.cpp + gpt4all
For those who don't know, llama.cpp is a port of Facebook's LLaMA model in pure C/C++:
- Without dependencies
- Apple silicon first-class citizen - optimized via ARM NEON
- AVX2 support for x86 architectures
- Mixed F16 / F32 precision
- 4-bit quantization support
- Runs on the CPU
Installation
- The easiest way is to use the prebuilt wheels:
pip install pyllamacpp
However, the llama.cpp build process takes the target CPU architecture into account, so you might need to build it from source:
git clone --recursive https://github.com/nomic-ai/pyllamacpp && cd pyllamacpp
pip install .
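After installation, a quick import check confirms the bindings are usable (the Model class is the entry point shown in the usage section below):

python -c "from pyllamacpp.model import Model"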
Usage
A simple Pythonic API is built on top of the llama.cpp C/C++ functions. You can call it from Python as follows:
from pyllamacpp.model import Model

# Called each time the model emits new text; stream it to stdout as it arrives
def new_text_callback(text: str):
    print(text, end="", flush=True)

# Load a GPT4All model with a 512-token context window
model = Model(ggml_model='./models/gpt4all-model.bin', n_ctx=512)
model.generate("Once upon a time, ", n_predict=55, new_text_callback=new_text_callback, n_threads=8)
If you don't want to use a callback, you can get the result from the generate method once the inference is finished:
generated_text = model.generate("Once upon a time, ", n_predict=55)
print(generated_text)
- You can pass any llama context parameter as a keyword argument to the Model class (see the sketch after this list).
- You can pass any gpt parameter as a keyword argument to the generate method.
- You can always refer to the short documentation for more details.
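A minimal sketch of the keyword-argument pass-through: n_ctx, n_predict, and n_threads appear in the examples above, while temp and top_k are assumptions based on the underlying llama.cpp parameter structs, so check the documentation for the exact names.

from pyllamacpp.model import Model

# n_ctx is a llama context parameter, forwarded when the model is loaded
model = Model(ggml_model='./models/gpt4all-model.bin', n_ctx=512)

# n_predict and n_threads are gpt parameters shown above;
# temp and top_k are assumed names for the sampling parameters
generated_text = model.generate("Once upon a time, ", n_predict=55, n_threads=8, temp=0.8, top_k=40)
print(generated_text)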
Supported model
GPT4All
Download a GPT4All model from https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/.
The easiest approach is to download a file whose name ends in ggml.bin; older model versions require conversion.
If you have downloaded an older model that you want to convert, run the following in your terminal:
pyllamacpp-convert-gpt4all path/to/gpt4all_model.bin path/to/llama_tokenizer path/to/gpt4all-converted.bin
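Once the conversion finishes, the converted file can be loaded like any other supported model (a minimal sketch, reusing the Model class shown above):

from pyllamacpp.model import Model

# Load the converted model produced by pyllamacpp-convert-gpt4all
model = Model(ggml_model='path/to/gpt4all-converted.bin', n_ctx=512)
print(model.generate("Once upon a time, ", n_predict=55))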
FAQs
- Where to find the llama tokenizer? #5
Discussions and contributions
If you find a bug, please open an issue.
If you have feedback, or you want to share how you are using this project, feel free to open a new topic in the Discussions.
License
This project is licensed under the same license as llama.cpp (MIT License).