Playing around with RAG (retrieval-augmented generation) and LLMs, exploring a few use cases.
- Open-source LLM: Mistral
- Vector store: Chroma DB
- Embedding function: Ollama `nomic-embed-text`
- Data: AEM guide
- pip install langchain-community
- pip install chromadb (if the build fails on chroma-hnswlib, a system library is missing; install it via `sudo apt-get install` — I believe it was `libfuse`, or a similar `lib*` package)
- Set up Ollama: curl -fsSL https://ollama.com/install.sh | sh
- ollama serve
- ollama pull nomic-embed-text
- ollama pull mistral
- Set DATA_PATH in load_documents() in data_loader.py to the directory containing your data
- Run data_loader.py
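For reference, the indexing step can be sketched roughly as below. This is a hypothetical outline, not the repository's actual data_loader.py: the paths, chunk sizes, and the assumption that the AEM guide is a directory of PDFs (loaded via `PyPDFDirectoryLoader`, which needs `pypdf`) are all mine.

```python
# Hypothetical sketch of data_loader.py; DATA_PATH, CHROMA_PATH, and the
# chunking parameters are assumptions, not values from this repository.
DATA_PATH = "data/aem_guide"   # point this at your AEM guide documents
CHROMA_PATH = "chroma"         # where Chroma persists its index
CHUNK_SIZE = 800               # characters per chunk
CHUNK_OVERLAP = 80             # overlap so chunks aren't cut mid-thought

def split_text(text, size=CHUNK_SIZE, overlap=CHUNK_OVERLAP):
    """Split raw text into overlapping fixed-size chunks for embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def main():
    # Third-party imports are deferred so the chunker above stays
    # importable without the full RAG stack installed.
    from langchain_community.document_loaders import PyPDFDirectoryLoader
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma

    documents = PyPDFDirectoryLoader(DATA_PATH).load()
    chunks = [c for doc in documents for c in split_text(doc.page_content)]
    # nomic-embed-text is served by the local `ollama serve` instance.
    Chroma.from_texts(
        chunks,
        OllamaEmbeddings(model="nomic-embed-text"),
        persist_directory=CHROMA_PATH,
    )

# Call main() to build and persist the index (requires Ollama running).
```

The overlap means the tail of each chunk is repeated at the head of the next, so a sentence that straddles a boundary is still fully present in at least one embedded chunk.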
- Run query_data.py
- Enter your queries, get your replies
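The query loop can be sketched as below. Again a hypothetical outline of query_data.py, not the actual code: the prompt template, `k=5` retrieval depth, and path names are assumptions that mirror the indexing sketch.

```python
# Hypothetical sketch of query_data.py; the template and names are
# assumptions about this repository, not its actual contents.
CHROMA_PATH = "chroma"

PROMPT_TEMPLATE = """Answer the question using only the context below.

{context}

---
Question: {question}
"""

def build_prompt(context_chunks, question):
    """Join the retrieved chunks and fill in the prompt template."""
    context = "\n\n---\n\n".join(context_chunks)
    return PROMPT_TEMPLATE.format(context=context, question=question)

def main():
    # Deferred imports so build_prompt stays usable without the stack.
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.llms import Ollama
    from langchain_community.vectorstores import Chroma

    db = Chroma(
        persist_directory=CHROMA_PATH,
        embedding_function=OllamaEmbeddings(model="nomic-embed-text"),
    )
    llm = Ollama(model="mistral")
    while True:
        question = input("Query (blank to quit): ").strip()
        if not question:
            break
        # Retrieve the 5 most similar chunks, then ask Mistral.
        docs = db.similarity_search(question, k=5)
        prompt = build_prompt([d.page_content for d in docs], question)
        print(llm.invoke(prompt))

# Call main() to start the interactive loop (requires the index from
# data_loader.py and a running Ollama server).
```

Note the query must be embedded with the same model (`nomic-embed-text`) used at index time, otherwise the similarity search is meaningless.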
