Local RAG with Ollama and LLAMA3

LLAMA3 running locally with the help of Ollama

Setup

  1. Clone the repo
  2. cd into the cloned repo directory
  3. python3 -m venv myvenv
  4. source myvenv/bin/activate
  5. pip install -r requirements.txt
  6. Install Ollama (https://ollama.com/download) and pull the model, as shown in the note after this list
  7. Run the script: python3 localrag.py
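
Note: localrag.py needs the LLAMA3 model to be available in Ollama before it can run. Assuming the default llama3 tag (check the script for the exact chat and embedding model names it expects), the model can be pulled with:

    ollama pull llama3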

Demo

https://www.youtube.com/shorts/AzEl4ZuTF5Q

What is RAG?

RAG (Retrieval-Augmented Generation) is a way to enhance the capabilities of LLMs by combining their powerful language understanding with targeted retrieval of relevant information from external sources, often by storing embeddings in a vector database. This leads to more accurate, trustworthy, and versatile AI-powered applications.
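
To make the retrieve-then-generate loop concrete, below is a minimal Python sketch of local RAG using the ollama package and cosine similarity over in-memory embeddings. It is an illustrative sketch, not the repo's localrag.py: the model names (llama3, mxbai-embed-large), the sample documents, and the in-memory vector store are assumptions.

    # Minimal local RAG sketch (illustrative; model names are assumptions, not taken from localrag.py).
    import numpy as np
    import ollama

    EMBED_MODEL = "mxbai-embed-large"  # assumed embedding model; pull it first (ollama pull mxbai-embed-large)
    CHAT_MODEL = "llama3"              # assumed chat model; pull it first (ollama pull llama3)

    documents = [
        "Ollama runs large language models such as LLAMA3 locally.",
        "RAG retrieves relevant context before asking the model to answer.",
        "Embeddings turn text into vectors so similar texts can be found.",
    ]

    def embed(text):
        # Ask Ollama for an embedding vector for the given text.
        return np.array(ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"])

    # A tiny in-memory "vector database": one embedding per document.
    doc_vectors = [embed(doc) for doc in documents]

    def retrieve(query, k=2):
        # Rank documents by cosine similarity to the query embedding and return the top k.
        q = embed(query)
        scores = [float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))) for d in doc_vectors]
        top = np.argsort(scores)[::-1][:k]
        return [documents[i] for i in top]

    query = "How does RAG use embeddings?"
    context = "\n".join(retrieve(query))

    # Generate an answer grounded in the retrieved context.
    response = ollama.chat(
        model=CHAT_MODEL,
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"}],
    )
    print(response["message"]["content"])

A fuller implementation would typically chunk longer documents before embedding them and persist the vectors in a real vector database instead of a Python list.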