Offline, Open-Source RAG
Ingest files for retrieval augmented generation (RAG) with open-source large language models (LLMs), all without third-party services or sensitive data ever leaving your network.
You need the Ollama server running first :)
curl -fsSL https://ollama.com/install.sh | sh
- git clone https://github.com/HyperUpscale/local-rag-ollama-github-url.git
- sudo apt update && sudo apt install python3-pip python3.10-venv -y
- cd local-rag-ollama-github-url && python3 -m venv venv && source venv/bin/activate
- pip install -r requirements.txt
- streamlit run main.py
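Under the hood, the retrieval step of a RAG pipeline ranks ingested chunks by embedding similarity to the question. A minimal sketch of that idea — note that `embed` here is a hypothetical hash-based stand-in for a real local embedding model served by Ollama (e.g. `nomic-embed-text`), used only so the snippet runs offline:

```python
import hashlib
import math

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a local Ollama embedding model:
    # a deterministic hash-derived vector, NOT semantically meaningful.
    digest = hashlib.sha256(text.lower().encode()).digest()
    return [b / 255 for b in digest[:16]]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all ingested chunks by similarity to the query embedding,
    # return the top-k as context for the LLM.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Ollama serves models locally.",
    "Streamlit builds the chat UI.",
    "RAG grounds answers in your own files.",
]
context = retrieve("Which tool serves the model locally?", chunks)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

In the real app the top-k chunks are pasted into the prompt like this and sent to the local model, so answers stay grounded in your ingested sources.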
Features:
- Offline Embeddings & LLMs Support (No OpenAI!)
- Support for Multiple Sources
  - Local Files
  - GitHub Repos
  - Websites
- Streaming Responses
- Conversational Memory
- Chat Export
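Conversational memory, in outline: each question/answer turn is appended to a history that is replayed as context on the next turn, and the same transcript can be dumped for chat export. A sketch of the pattern under those assumptions, not the app's actual implementation:

```python
class ChatMemory:
    """Minimal conversational memory: keep the last n turns as model context."""

    def __init__(self, max_turns: int = 10):
        self.turns: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def add(self, user: str, assistant: str) -> None:
        # Append the latest turn and drop the oldest beyond the window.
        self.turns.append((user, assistant))
        self.turns = self.turns[-self.max_turns:]

    def as_prompt(self) -> str:
        # Replayed ahead of each new question so the model sees prior turns.
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)

    def export(self) -> str:
        # Plain-text transcript, e.g. for a chat-export feature.
        return self.as_prompt()

memory = ChatMemory(max_turns=2)
memory.add("Hi", "Hello!")
memory.add("What is RAG?", "Retrieval augmented generation.")
memory.add("Is it offline?", "Yes, with local models.")
```

Capping the window keeps the replayed history within the model's context length; older turns simply age out.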
Learn More: