llama.cpp

Quickstart guide for running llama.cpp

Primary language: C · License: MIT

Build:

git clone https://github.com/star-bits/llama.cpp
cd llama.cpp

make

Models:

ls ../../weights/llama.cpp/models
alpaca-7b-ggml-q4_0-lora-merged
gpt4all-7b-ggml-q4_0-lora-merged
llama-7b-ggml-q4_0
vicuna-13b
vicuna-7b

Help:

./main -h

System prompt:

./prompts/context-setting.txt

A conversation with an AI that provides helpful, polite, and concise answers.

User: Hey, you there?
AI: At your service, sir.
User:

Notebook:

./pyllamacpp.ipynb

Run:

./main -m ../../weights/llama.cpp/models/vicuna-13b/ggml-vic13b-q4_0.bin -n -1 --repeat_penalty 1.0 --color -i -f prompts/context-setting.txt -r "User:"

Flags: -m is the path to the ggml model file; -n -1 generates tokens until the context is full; --repeat_penalty 1.0 disables the repetition penalty; --color colorizes the output; -i enables interactive mode; -f loads the system prompt from a file; -r "User:" sets the reverse prompt, so control returns to you whenever the model emits that string.
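Retyping the long command gets tedious, so it can be wrapped in a small script. This is a hypothetical sketch (the script name chat.sh and the variable names are assumptions; the model path and flags are the ones used above):

```shell
#!/bin/sh
# chat.sh - hypothetical wrapper around the run command above.
# Edit MODEL and PROMPT to match your local layout.
MODEL="../../weights/llama.cpp/models/vicuna-13b/ggml-vic13b-q4_0.bin"
PROMPT="prompts/context-setting.txt"
# Build the full invocation so it can be printed before running.
CMD="./main -m $MODEL -n -1 --repeat_penalty 1.0 --color -i -f $PROMPT -r User:"
echo "$CMD"
# Uncomment to actually launch the chat:
# $CMD
```

Printing the command first makes it easy to copy, tweak, or paste a variant (for example, swapping in vicuna-7b) without editing the script.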