local-rag-llamaindex

Local LlamaIndex RAG to help researchers quickly navigate research papers


RAG: Research-assistant


This project aims to help researchers find answers from a set of research papers with the help of a customized RAG pipeline and a powerful LLM, all offline and free of cost.

For more details, please check out the blog post about this project.

How it works

Project Architecture

  1. Download some research papers from arXiv
  2. Use LlamaIndex to load, chunk, embed and store these documents in a Qdrant database
  3. A FastAPI endpoint receives a query/question, searches through our documents and finds the best-matching chunks
  4. Feed these relevant chunks into an LLM as context
  5. Generate an easy-to-understand answer and return it as an API response, along with the cited sources (the whole flow is sketched in code after this list)
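
In LlamaIndex terms, the flow above can be sketched roughly as follows. This is a minimal, illustrative sketch rather than the project's actual code: it assumes recent llama-index packages with the Qdrant, Hugging Face embedding and Ollama integrations installed, and the collection name, data path and model names are placeholders.

from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

# Keep everything local: a Hugging Face embedding model and an Ollama-served LLM (names are placeholders).
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = Ollama(model="research_assistant", request_timeout=120.0)

# Steps 1-2: load the downloaded papers, chunk and embed them, store the vectors in Qdrant.
client = QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="research_papers")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
documents = SimpleDirectoryReader("data/papers").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Steps 3-5: retrieve the best-matching chunks for a question and let the LLM answer with sources.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What are the main limitations of current LLM benchmarks?")
print(response)               # the generated answer
print(response.source_nodes)  # the chunks used as context, i.e. the sources to cite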

Running the project

Starting a Qdrant Docker instance

docker run -p 6333:6333 -v ~/qdrant_storage:/qdrant/storage:z qdrant/qdrant
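
Once the container is up, you can quickly verify the connection, for instance with the Python Qdrant client (a hypothetical check, not part of the project code):

from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")
print(client.get_collections())  # empty until the ingestion step has run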

Downloading & Indexing data

python rag/data.py --query "LLM" --max 10 --ingest
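
Judging from the flags, the script first pulls papers from arXiv and then runs an ingestion step like the one sketched earlier. The download part could look roughly like this, assuming the arxiv Python package; the output directory is a placeholder:

import os

import arxiv

os.makedirs("data/papers", exist_ok=True)  # placeholder directory

# Fetch metadata and PDFs for the top matches of the query (illustrative only).
search = arxiv.Search(query="LLM", max_results=10, sort_by=arxiv.SortCriterion.Relevance)
for result in arxiv.Client().results(search):
    print(result.title)
    result.download_pdf(dirpath="data/papers")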

Starting the Ollama LLM server

Follow this article for more info on how to run models from Hugging Face locally with Ollama.

Create model from Modelfile

ollama create research_assistant -f ollama/Modelfile 

Start the model server

ollama run research_assistant

By default, Ollama runs on http://localhost:11434
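
To sanity-check that the model is reachable, you can call Ollama's generate endpoint directly (example only, not part of the project):

import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "research_assistant",
        "prompt": "In one sentence, what is retrieval-augmented generation?",
        "stream": False,
    },
)
print(resp.json()["response"])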

Starting the API server

uvicorn app:app --reload
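
For orientation, a stripped-down version of such an endpoint could look like the sketch below. It is not the project's actual app.py: the /query route, the payload field and the collection name are assumptions, and it expects the LlamaIndex Settings (embedding model and Ollama LLM) to be configured as in the earlier sketch.

from fastapi import FastAPI
from pydantic import BaseModel
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

app = FastAPI()

# Reuse the vectors that the ingestion step stored in Qdrant (collection name assumed).
vector_store = QdrantVectorStore(
    client=QdrantClient(url="http://localhost:6333"),
    collection_name="research_papers",
)
query_engine = VectorStoreIndex.from_vector_store(vector_store).as_query_engine()

class Query(BaseModel):
    question: str

@app.post("/query")  # route name assumed; check app.py for the real path
def answer(payload: Query):
    response = query_engine.query(payload.question)
    return {
        "answer": str(response),
        "sources": [node.node.metadata for node in response.source_nodes],
    }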

Example

Request

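With the route assumed in the sketch above, a request could be sent like this (illustrative; the actual path and payload are defined in app.py):

import requests

resp = requests.post(
    "http://localhost:8000/query",  # default uvicorn host/port; route name assumed
    json={"question": "What are the main limitations of current LLM benchmarks?"},
)
print(resp.status_code)
print(resp.json())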

Response

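The JSON body then contains the generated answer together with the source chunks that were fed to the LLM as context, roughly of the form (field names and values are illustrative):

{
  "answer": "A plain-language answer generated by the local LLM...",
  "sources": [
    {"file_name": "some_paper.pdf", "page_label": "3"}
  ]
}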