---
title: Ai Chatbot w/ Langchain, Ollama, and Streamlit
emoji: 📊
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit
---
Run your own AI chatbot locally on a GPU or even a CPU.
To make that possible, we use the Mistral 7B model.
We will use an LLM inference engine called Ollama to run our LLM and to serve
an inference API endpoint, and have LangChain connect to that endpoint instead of running the LLM directly.
This AI chatbot lets you define its personality, and it responds to questions accordingly.
There is no chat memory in this iteration, so you won't be able to ask follow-up questions.
The chatbot will essentially behave like a Question/Answer bot.
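As a rough sketch of how these pieces fit together (this is not the repository's `main.py`; the prompt wording and example personality below are illustrative assumptions), LangChain talks to the locally running Ollama server over HTTP rather than loading the model itself:

```python
# Illustrative sketch only -- not the repository's main.py.
# Assumes `ollama serve` is running locally and `ollama pull mistral` has completed.
from langchain.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# LangChain connects to Ollama's HTTP inference endpoint instead of loading Mistral itself.
llm = Ollama(base_url="http://localhost:11434", model="mistral")

# The "personality" is just part of the prompt; with no chat memory,
# each question is answered independently, Q/A style.
prompt = PromptTemplate.from_template(
    "You are {personality}. Answer the question below.\n"
    "Question: {question}\nAnswer:"
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(personality="a cheerful science teacher", question="What is Mistral 7B?"))
```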
- Install `ollama`
- Install `langchain`
- Install `streamlit`
- Run `streamlit`
The setup assumes you have `python` already installed and the `venv` module available.
- Install `ollama` from [ollama.ai](https://ollama.ai).
- Start `ollama`:

  ```sh
  ollama serve
  ```

- Download the `mistral` LLM using `ollama`:

  ```sh
  ollama pull mistral
  ```
- Download the code or clone the repository.
- Inside the root folder of the repository, initialize a python virtual environment:

  ```sh
  python -m venv .venv
  ```

- Activate the python environment:

  ```sh
  source .venv/bin/activate
  ```

- Install required packages (`langchain` and `streamlit`):

  ```sh
  pip install -r requirements.txt
  ```
- Start `streamlit` (a minimal sketch of what such an app can look like follows these steps):

  ```sh
  streamlit run main.py
  ```
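For orientation only, here is a hedged sketch of what a memory-less Streamlit front end over that chain can look like; the repository's actual `main.py` may be structured differently, and the widget labels and default personality below are assumptions:

```python
# Hypothetical sketch of a minimal Streamlit front end -- not the repository's main.py.
import streamlit as st
from langchain.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

st.title("AI Chatbot")

# Let the user define the bot's personality and ask a single question.
personality = st.text_input("Chatbot personality", value="a helpful assistant")
question = st.text_input("Your question")

if question:
    llm = Ollama(base_url="http://localhost:11434", model="mistral")
    prompt = PromptTemplate.from_template(
        "You are {personality}. Answer the question below.\n"
        "Question: {question}\nAnswer:"
    )
    # No conversation history is kept, so follow-up questions are not supported.
    answer = LLMChain(llm=llm, prompt=prompt).run(personality=personality, question=question)
    st.write(answer)
```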