Note that we are not able to provide the trained models, so the actual ML models are mocked out (the chatbot simply returns random messages).
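For illustration, a mock along those lines can be as simple as the following sketch (the class name and canned replies are hypothetical, not the repository's actual code):

```python
import random

# Hypothetical stand-in for the real trained chatbot model.
class MockChatbotModel:
    CANNED_REPLIES = [
        "Hello! How can I help you?",
        "Could you tell me more about that?",
        "I'm not sure, but I'll look into it.",
    ]

    def predict(self, message: str) -> str:
        # Ignore the input and return a random canned reply.
        return random.choice(self.CANNED_REPLIES)
```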
Install dependencies with `pip install --use-deprecated=legacy-resolver -U -r requirements.txt`.
Run `python train.py`.
To run multiple tuning trials: `python train.py --num-trials <num_trials>`.
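As a rough sketch of how a `--num-trials` flag can drive Ray Tune (illustrative only; the actual training logic lives in `train.py`):

```python
import argparse
from ray import tune

def train_fn(config):
    # Hypothetical objective; the real train.py defines its own training logic.
    return {"score": config["lr"] * 100}

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--num-trials", type=int, default=1)
    args = parser.parse_args()

    # One Tune trial is launched per sample.
    tune.run(
        train_fn,
        config={"lr": tune.uniform(0.001, 0.1)},
        num_samples=args.num_trials,
    )
```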
If you use macOS and see the error "Resource stopwords not found.", follow this link (gunthercox/ChatterBot#930 (comment)) to download the required data from NLTK.
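Alternatively, the required corpora can usually be fetched directly with NLTK's downloader (the exact set of corpora ChatterBot needs may vary by version):

```python
import nltk

# Download the NLTK data ChatterBot typically relies on.
nltk.download("stopwords")
nltk.download("punkt")
nltk.download("wordnet")
```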
Start Ray and run the Ray Serve deploy script:
```
ray start --head
python chatbot.py
```
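For context, the general Ray Serve pattern of a deployment wrapping a FastAPI ingress (which `chatbot.py` follows; the class and route below are illustrative, not the actual code) looks roughly like this:

```python
from fastapi import FastAPI
from ray import serve

app = FastAPI()

@serve.deployment
@serve.ingress(app)
class Chatbot:
    @app.post("/chat")
    async def chat(self, message: str) -> dict:
        # The real deployment would call the (mocked) model here.
        return {"reply": f"echo: {message}"}

# Deploys the application on the running Ray cluster (HTTP on port 8000 by default).
serve.run(Chatbot.bind())
```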
Navigate to `localhost:8000` to see the UI.
The frontend code is hosted by the FastAPI app in Ray Serve; it lives in `frontend/`.
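One common way to host a frontend directory from the same FastAPI app is to mount it as static files; this is an assumption about the general pattern, not necessarily the exact wiring used here:

```python
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# Serve the frontend at the root path; html=True makes index.html the default page.
app.mount("/", StaticFiles(directory="frontend", html=True), name="frontend")
```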
The Grafana dashboard JSON and the Locust load-testing file are in `util/`.
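A minimal Locust file for load testing the service could look like the sketch below (the endpoint path and payload are assumptions; check the actual file in `util/`):

```python
from locust import HttpUser, task, between

class ChatbotUser(HttpUser):
    # Wait 1-3 seconds between simulated requests.
    wait_time = between(1, 3)

    @task
    def chat(self):
        # Hypothetical endpoint; the real target is defined in util/.
        self.client.post("/chat", params={"message": "hello"})
```

Run it against the local deployment with something like `locust -f <locustfile> --host http://localhost:8000`.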
CI is not configured to run on this repository, but the workflow definition is in `.github/workflows/deploy.yml`.