This repository contains examples of RAG (retrieval-augmented generation) systems for answering questions about local `.pdf` files.
`rag_local_retrieval.py`
- Load, embed, and store a .pdf file in a vector DB
- Local GPT4All LLM based on the `mistral-7b` model
- Custom prompt
- RetrievalQA chain
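The steps above can be sketched as follows. This is a minimal illustration, not the repository's exact code: it assumes LangChain with Chroma as the vector DB, `HuggingFaceEmbeddings` for embedding, and a local `mistral-7b` GGUF file for GPT4All; the prompt text, function name, and file paths are hypothetical.

```python
# Hypothetical custom prompt; the repository's actual wording may differ.
PROMPT_TEMPLATE = """Use the following context to answer the question.
Context: {context}
Question: {question}
Answer:"""


def build_qa_chain(pdf_path: str, model_path: str):
    """Build a RetrievalQA chain over one PDF with a local GPT4All LLM."""
    # Imports are inside the function so the sketch can be read (and the
    # prompt reused) without the heavy dependencies installed.
    from langchain.chains import RetrievalQA
    from langchain.prompts import PromptTemplate
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.llms import GPT4All
    from langchain_community.vectorstores import Chroma

    # Load the PDF and split it into overlapping chunks for retrieval.
    docs = PyPDFLoader(pdf_path).load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)

    # Embed the chunks and store them in the vector DB (Chroma assumed).
    db = Chroma.from_documents(chunks, HuggingFaceEmbeddings())

    # Local LLM: model_path points at a mistral-7b .gguf file on disk.
    llm = GPT4All(model=model_path)

    prompt = PromptTemplate(
        template=PROMPT_TEMPLATE, input_variables=["context", "question"]
    )
    return RetrievalQA.from_chain_type(
        llm=llm,
        retriever=db.as_retriever(),
        chain_type_kwargs={"prompt": prompt},
    )
```

A built chain would be invoked with something like `chain.invoke({"query": "What is the document about?"})`.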
`rag_api_hf.py`
- Load, embed, and store a .pdf file in a vector DB
- Remote Hugging Face LLM based on the `zephyr-7b` model, accessed via API
- Custom prompt
- RetrievalQA chain
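The API-backed variant differs mainly in the LLM step, so a sketch can focus on that. Assumptions as before (LangChain, Chroma, `HuggingFaceEmbeddings`); the `repo_id` shown (`HuggingFaceH4/zephyr-7b-beta`), generation parameters, and function name are illustrative, not taken from the repository.

```python
# Hypothetical custom prompt; the repository's actual wording may differ.
PROMPT_TEMPLATE = """Use the following context to answer the question.
Context: {context}
Question: {question}
Answer:"""


def build_api_qa_chain(pdf_path: str, hf_api_token: str):
    """Build a RetrievalQA chain that calls a remote zephyr-7b via the HF API."""
    from langchain.chains import RetrievalQA
    from langchain.prompts import PromptTemplate
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.llms import HuggingFaceHub
    from langchain_community.vectorstores import Chroma

    docs = PyPDFLoader(pdf_path).load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)
    db = Chroma.from_documents(chunks, HuggingFaceEmbeddings())

    # Remote LLM: inference runs on Hugging Face's servers, so no local
    # model weights are needed, only an API token.
    llm = HuggingFaceHub(
        repo_id="HuggingFaceH4/zephyr-7b-beta",
        huggingfacehub_api_token=hf_api_token,
        model_kwargs={"max_new_tokens": 256, "temperature": 0.1},
    )

    prompt = PromptTemplate(
        template=PROMPT_TEMPLATE, input_variables=["context", "question"]
    )
    return RetrievalQA.from_chain_type(
        llm=llm,
        retriever=db.as_retriever(),
        chain_type_kwargs={"prompt": prompt},
    )
```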
`rag_api_openai_chat.py`
- Load, embed, and store a .pdf file in a vector DB
- Remote OpenAI LLM based on the `GPT-3.5-Turbo` model, accessed via paid API
- Chat memory: enhanced conversational capabilities compared to the chains above
- Custom prompt
- ConversationalRetrievalChain chain
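The conversational variant adds a memory object so follow-up questions can refer back to earlier turns. A minimal sketch, under the same assumptions (LangChain, Chroma, `HuggingFaceEmbeddings`); the memory configuration, prompt text, and function name are illustrative.

```python
# Hypothetical prompt for the answer-combining step.
PROMPT_TEMPLATE = """Use the following context to answer the question.
Context: {context}
Question: {question}
Answer:"""


def build_chat_chain(pdf_path: str):
    """Build a ConversationalRetrievalChain with chat memory over one PDF.

    Requires the OPENAI_API_KEY environment variable (paid API).
    """
    from langchain.chains import ConversationalRetrievalChain
    from langchain.memory import ConversationBufferMemory
    from langchain.prompts import PromptTemplate
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.vectorstores import Chroma
    from langchain_openai import ChatOpenAI

    docs = PyPDFLoader(pdf_path).load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)
    db = Chroma.from_documents(chunks, HuggingFaceEmbeddings())

    # The memory stores prior (question, answer) turns under "chat_history",
    # which is what distinguishes this chain from the RetrievalQA ones above.
    memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True
    )

    prompt = PromptTemplate(
        template=PROMPT_TEMPLATE, input_variables=["context", "question"]
    )
    return ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
        retriever=db.as_retriever(),
        memory=memory,
        combine_docs_chain_kwargs={"prompt": prompt},
    )
```

With memory attached, a second call like `chain.invoke({"question": "Can you elaborate on that?"})` is resolved against the stored chat history, which the single-turn RetrievalQA chains cannot do.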