These tutorials will help you build, step by step, a RAG chatbot using Large Language Models (LLMs) and LangChain. Here is the list of lessons:
- Lesson 1: Hello LLM
- Lesson 2: Hello LLM with providers configuration
- Lesson 3: a simple Chatbot
- Lesson 4: a simple Chatbot with Context
- Lesson 5: just a few improvements
- Lesson 6: an Object-Oriented Chatbot with LangChain
- Lesson 7: Configure generation parameters
- Lesson 8: Memory Management during the conversation
- Lesson 9: DataWaeve CLI and Qdrant
- Lesson 10: RAG implementation
- Lesson 11: Add UI interface with Streamlit
To run the tutorials you need Python 3 installed on your machine. On macOS you can simply type:

```sh
brew install python3
```
To run the tutorials with the Ollama provider you need to install the Ollama CLI:

```sh
brew install ollama
```
You can start the Ollama server with the command:

```sh
ollama serve
```
In another terminal:
- download the llama3 model into the ~/.ollama folder:

  ```sh
  ollama pull llama3
  ```

- list the downloaded models with the command:

  ```sh
  ollama list
  ```
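As a quick sanity check, you can verify a model was downloaded by parsing the tabular output of `ollama list`. Here is a minimal sketch, assuming the default column layout (a header row, then one whitespace-separated row per model with the name first):

```python
def parse_ollama_list(output: str) -> list[str]:
    """Extract model names from `ollama list` output.

    Assumes the default format: a header row followed by one
    whitespace-separated row per model, with the name first.
    """
    lines = output.strip().splitlines()
    # Skip the header row (NAME  ID  SIZE  MODIFIED)
    return [line.split()[0] for line in lines[1:] if line.strip()]

# Example with hypothetical output:
sample = """NAME            ID              SIZE    MODIFIED
llama3:latest   365c0bd3c000    4.7 GB  2 days ago
"""
print(parse_ollama_list(sample))  # ['llama3:latest']
```

If `llama3:latest` does not appear in the list, re-run `ollama pull llama3` before moving on.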
To run the tutorials you need to clone the langchain-tutorials repository in a `<workspace>` folder:

```sh
cd <workspace>
git clone https://github.com/sasadangelo/langchain-tutorials
cd langchain-tutorials
```
To run the tutorials, follow these steps:
- Create and activate the Python virtual environment:

  ```sh
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install the dependencies:

  ```sh
  pip3 install -r requirements.txt
  ```

- Run each tutorial following the instructions in its lesson.
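Once the environment is ready, the spirit of Lesson 1 ("Hello LLM") boils down to sending a prompt to the local Ollama server. The following sketch uses only the Python standard library and Ollama's REST endpoint `/api/generate`; the model name, prompt, and function names are illustrative, and the lessons themselves use LangChain rather than raw HTTP:

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> dict:
    # Payload for Ollama's /api/generate endpoint;
    # stream=False asks for a single JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(model: str, prompt: str,
               host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally running Ollama server."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the text in the "response" field.
        return json.loads(resp.read())["response"]


# With `ollama serve` running and llama3 pulled, you could call:
#     print(ask_ollama("llama3", "Hello, LLM!"))
```

If this works, your Ollama setup is ready and you can move on to the LangChain-based version in Lesson 1.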