A chatbot backend service.

There are two ways to install it:
- From the Docker image:

  ```shell
  docker pull {IMAGE_URL}
  ```

- From source code:

  ```shell
  git clone {REPOSITORY_URL}
  pip install -r requirements.txt
  ```
Please install and run a Redis server yourself.
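Once Redis is running, the backend reaches it through the `REDIS_URL` setting. A minimal sketch of the usual `redis://host:port/db` URL form, parsed with the standard library (the example URL is hypothetical; use whatever matches your deployment):

```python
from urllib.parse import urlparse

# Hypothetical REDIS_URL value; adjust to your Redis deployment.
redis_url = "redis://localhost:6379/0"

parts = urlparse(redis_url)
host = parts.hostname                             # "localhost"
port = parts.port                                 # 6379
db = int((parts.path or "/0").lstrip("/") or 0)   # database index, 0 here
print(f"Redis at {host}:{port}, db {db}")
```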
Currently, only OpenAI is supported. Please obtain an OpenAI API key yourself and configure it in the `.env` file (template below).
If you want to use RAG, you need to prepare a vector database. Currently, only Chroma is supported.
There are two ways to prepare it:

- From an existing database file: configure `VECTOR_DATABASE_PATH` in the `.env` file.
- From a corpus: configure `CORPUS_PATH` in the `.env` file.
Then vectorize the corpus:

```python
from src.vector import vectorize_corpus

vectorize_corpus(corpus_path={CORPUS_PATH})
```
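Corpus vectorization typically splits the text into overlapping chunks before embedding them into the vector database. A minimal sketch of that chunking step, written as a standalone helper; `vectorize_corpus`'s actual chunking strategy may differ:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows of at most chunk_size.

    This is an illustrative helper, not the project's own implementation.
    """
    step = chunk_size - overlap
    # Stop once the remaining tail is fully covered by the previous window.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

# A 500-character text yields three windows: 0-200, 150-350, 300-500.
print(len(chunk_text("a" * 500)))
```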
Here is the `.env` template:

```
# OpenAI
OPENAI_API_KEY=

# Corpus
CORPUS_PATH=

# Vector database
VECTOR_DATABASE_PATH=

# Redis
REDIS_URL=

# Server
HOST=
PORT=
```
There are two ways to run it:

- From the Docker image.
- From source code.
Here is an example `docker-compose.yaml` for running from the Docker image:

```yaml
services:
  backend:
    image: chat-bot-backend:latest
    ports:
      - "8000:8000"
    volumes:
      - {VECTOR_DATABASE_PATH}:/app/database
      - {.ENV_PATH}:/app/.env
```
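With that compose file in the current directory, the stack can be brought up like this (a sketch assuming Docker Compose v2; the service name matches the file above):

```shell
# Start the backend service in the background.
docker compose up -d backend

# Follow its logs to confirm a clean startup.
docker compose logs -f backend
```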
To run from source code, run `script/run.sh`.