An interface for interacting with files using LangChain, inspired by the Ollama examples. It follows the usual retrieval-augmented generation (RAG) structure, sketched in code after this list:
- Load documents
- Chunk them
- Create embeddings and store them in a vector datastore
- Retrieve relevant chunks from the datastore and answer the query with an LLM
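The following is a minimal sketch of that pipeline, not the exact contents of main.py. It assumes the langchain-community integrations for Ollama and Chroma (module paths vary across LangChain releases, and older versions expose the same classes under `langchain.*`), and a hypothetical input file named `data.txt`:

```python
# Sketch only: assumes `langchain`, `langchain-community`, and `chromadb`
# are installed and an Ollama server is running locally with llama2 pulled.
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma

# 1. Load documents ("data.txt" is a hypothetical example file)
docs = TextLoader("data.txt").load()

# 2. Chunk them into overlapping pieces small enough to embed
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# 3. Create embeddings and store them in a vector datastore (Chroma here)
vectorstore = Chroma.from_documents(chunks, OllamaEmbeddings(model="llama2"))

# 4. Retrieve the most relevant chunks and answer with the local LLM
qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama2"),
    retriever=vectorstore.as_retriever(),
)
print(qa.invoke({"query": "Can you summarise the content of the file?"})["result"])
```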
Run Llama 2 locally:

```bash
ollama run llama2
```
Install the required dependencies:

```bash
pip install -r requirements.txt
```
Run the project:

```bash
python main.py
```
Example query:

```
Query: Can you summarise the content of the file?
```
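The prompt loop in main.py might look like the following hypothetical sketch (the actual script may differ); it feeds each query into the `qa` chain built in the pipeline sketch above:

```python
# Hypothetical interactive loop; assumes the `qa` chain from the sketch above.
while True:
    query = input("Query: ")
    if not query.strip():
        break  # empty input exits the loop
    print(qa.invoke({"query": query})["result"])
```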