Prerequisites:
- Create a virtual environment (I'm using venv).
- Install the Python dependencies:
pip install -r requirements.txt
- Download and install Ollama.

After installing, run `ollama run llama3.2` in your terminal to download the pretrained LLM model. In this project I'm using llama3.2-3B because it's lightweight and has good performance.
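Once the model is pulled, Ollama serves it locally over a REST API (port 11434 by default), which is one way the chatbot can query it. A minimal sketch using the `/api/chat` endpoint; the helper names below are my own illustration, not code from this project:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(user_message, history=None, model="llama3.2"):
    """Assemble the JSON body for Ollama's /api/chat endpoint."""
    messages = list(history or [])  # copy so the caller's history isn't mutated
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, "stream": False}

def chat(user_message, history=None):
    """Send one chat turn to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_payload(user_message, history)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Passing the accumulated `history` back in on each turn is what gives the model conversational context, since the API itself is stateless.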
Deployment

I'm using Streamlit for the UI and Docker to deploy the chatbot. Build and run the container:
docker build -t ecommerce-chatbot-app .
docker run -d --restart always --gpus all --name ecommerce-chatbot-app -p 8501:8501 ecommerce-chatbot-app
The app will then be available at http://localhost:8501. To stop and remove the container, run:
docker stop ecommerce-chatbot-app && docker rm ecommerce-chatbot-app
Retrieving data from the API
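Product records pulled from the API can be flattened into plain-text snippets for the chatbot to search over. A hedged sketch; the field names and helper below are illustrative, not the actual API schema:

```python
def product_to_snippet(product):
    """Turn one product record (a dict) into a plain-text snippet for retrieval."""
    return (f"{product['title']} ({product['category']}), "
            f"${product['price']:.2f}: {product['description']}")

# Example record in the assumed shape
sample = {
    "title": "Wireless Mouse",
    "category": "electronics",
    "price": 19.5,
    "description": "Compact 2.4 GHz mouse.",
}
```

Keeping the price and category inside the snippet lets the model answer questions like "what's the cheapest item in electronics?" without a separate lookup step.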