- Python 3.9
- Ollama for running local models
- `cd` into the project folder
- Run `pipenv install` to install all dependencies from the Pipfile. If you don't have pipenv installed, run `pip install pipenv` first.
- Pull your LLM from Ollama, e.g. `ollama run mistral`
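Besides the CLI, a pulled model can also be queried from Python through Ollama's local REST API (served on port 11434 by default). This is a hedged sketch using only the standard library; the model name `mistral` and a locally running Ollama server are assumptions about your setup:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot completions.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the completion text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
#   print(generate("mistral", "Say hello in one word."))
```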
- First, put your Markdown file under `data/raw` and create your vector database using `src/create_vectordb.py`
- For simple retrieval, edit the desired search question in `src/retrieval.py` and run it in your IDE.
- To pass the retrieved documents into the context window of an LLM and generate an answer, edit and run `src/chatbot.py`
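The retrieve-then-generate flow behind `src/retrieval.py` and `src/chatbot.py` can be illustrated with a minimal, dependency-free sketch. Note this is an assumption-laden stand-in: the real scripts query the vector database built above, while here plain term-frequency cosine similarity substitutes for embedding similarity:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased term-frequency vector; a stand-in for a real embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k document chunks most similar to the question."""
    q = tokenize(question)
    return sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Stuff the retrieved chunks into the prompt sent to the LLM."""
    joined = "\n\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

# Toy corpus standing in for chunks of a Markdown file from data/raw.
chunks = [
    "Pipenv manages Python virtual environments and dependencies.",
    "Ollama runs large language models locally on your machine.",
    "Markdown files are plain text with lightweight formatting.",
]
top = retrieve("How do I run a local language model?", chunks)
print(build_prompt("How do I run a local language model?", top))
```

In the real pipeline, the string returned by `build_prompt` is what gets passed to the Ollama model to generate the final answer.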