This is an example of how to use the llama.cpp-ts package to interact with language models loaded through llama.cpp.
Before installing, download any GGUF model that your machine can handle, for example Meta-Llama-3.1-8B-Instruct-Q3_K_S.gguf, and put it in the /models folder. Then install the dependencies and run the example:
```bash
yarn install
yarn start
```
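
The example's entry point does something along the lines of the sketch below. This is a minimal sketch only: the `LlamaWrapper` class and the `loadModel` / `getCompletion` method names are assumptions about the llama.cpp-ts API, and the model filename is the one suggested above; check the package's documentation for the actual exports.

```typescript
// Minimal sketch of querying a local GGUF model via llama.cpp-ts.
// NOTE: the class and method names below (LlamaWrapper, loadModel,
// getCompletion) are assumptions and may differ from the package's
// real API; adjust them to match the actual exports.
import { LlamaWrapper } from "llama.cpp-ts";

async function main(): Promise<void> {
  const llama = new LlamaWrapper();

  // Load the GGUF model placed in the /models folder.
  llama.loadModel("./models/Meta-Llama-3.1-8B-Instruct-Q3_K_S.gguf");

  // Send a prompt and print the generated completion.
  const answer = await llama.getCompletion("What is the capital of France?");
  console.log(answer);
}

main().catch(console.error);
```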