A ready-to-use, 100% local setup for the Cheshire Cat + Ollama + embedder + Qdrant
- You need a GPU and some technical expertise to run this
- The setup is English-language only
- Clone the repo:
git clone https://github.com/cheshire-cat-ai/local-cat.git
- Enter the directory:
cd local-cat
- Build and start the Cat:
docker-compose up
- Pull the desired model from the Ollama library:
docker exec ollama_cat ollama pull <model_name:tag>
- Two-command setup:
  - create the Docker Compose file
  - set up the core image and volumes
    - volumes: static, public, plugins, metadata.json
  - connect to the Qdrant container
  - embedder: CPU-based, bge-small-en-v1.5
  - Ollama: GPU-based
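The setup described in the notes above can be sketched as a `docker-compose.yml`. This is an illustrative sketch only: the image tags, ports, and volume mount targets are assumptions, not the repo's actual compose file (only the `ollama_cat` container name and the volume list come from this document).

```yaml
# Sketch only — image names, ports, and mount paths are assumptions.
services:
  cheshire-cat-core:
    image: ghcr.io/cheshire-cat-ai/core:latest   # assumed core image
    ports:
      - "1865:80"                                # assumed Cat port mapping
    volumes:                                     # volumes from the notes above
      - ./static:/app/cat/static
      - ./public:/app/cat/public
      - ./plugins:/app/cat/plugins
      - ./metadata.json:/app/cat/metadata.json
    depends_on:
      - qdrant
      - ollama

  qdrant:
    image: qdrant/qdrant:latest                  # vector store container

  ollama:
    container_name: ollama_cat                   # matches the docker exec command above
    image: ollama/ollama:latest
    deploy:                                      # GPU reservation for Ollama
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

The embedder (bge-small-en-v1.5) runs on CPU inside the core container, so only the Ollama service reserves the GPU.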
- One-command setup (goal):
  - have the container download the LLM by itself (mechanism still undecided)
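One way the one-command goal could work, sketched as a compose fragment. This is a hypothetical approach, not the repo's mechanism (the notes say the download method is undecided): override the Ollama entrypoint so the model named in an environment variable is pulled at startup, removing the manual `ollama pull` step. The `MODEL` variable and the default model name are assumptions.

```yaml
# Hypothetical fragment — MODEL and the entrypoint override are assumptions.
services:
  ollama:
    container_name: ollama_cat
    image: ollama/ollama:latest
    environment:
      - MODEL=mistral              # hypothetical default model
    entrypoint: >
      sh -c "ollama serve &
             sleep 3 &&
             ollama pull $${MODEL} &&
             wait"
```

With something like this, `docker compose up` alone would bring up the stack and fetch the model, at the cost of a slower first start.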