- Run `pip3 install rasa==2.0.0rc2`. NOTE: This installs a lot of dependencies; you may want to use virtualenv to keep your local dependencies organized.
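The isolated-install suggestion can be sketched as follows (the directory name `rasa-env` is arbitrary; the final install line is commented out because it pulls the full dependency tree):

```shell
# Keep Rasa's large dependency tree out of the system site-packages
# by installing into a virtual environment ("rasa-env" is an arbitrary name).
python3 -m venv rasa-env
. rasa-env/bin/activate          # "source rasa-env/bin/activate" in bash
python -m pip --version          # pip now resolves inside the virtualenv
# pip3 install rasa==2.0.0rc2   # install Rasa inside the isolated environment
```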
- Run `rasa train`. This should train the Rasa Assistant in 2 initial steps: (1) NLU model training and (2) Core model training.
- Once training is complete, you will receive a notification that specifies the directory and filename of the trained model:
  `Your Rasa model is trained and saved at '/Users/projects/personal/rasa-project/models/20200927-222831.tar.gz'`
  NOTE: The model name seems to be derived from the date and time when `rasa train` was called. The initial training took 26 mins. for me, as it included the download time for the base model: https://github.com/PolyAI-LDN/polyai-models/releases/download/v1.0/model.tar.gz
- Rasa will ask you whether you'd like to test the trained Rasa Assistant on your first try.
- I tried running training again after updating the `favorite_color` intent, and it was able to train in less than 3 mins.
- Tried running again without any changes and I got:
  `Nothing changed. You can use the old model stored at '/Users/projects/personal/rasa-project/models/20200927-230117.tar.gz'.`
- You can run `rasa shell` to test the latest trained model.
- Something very unique to Rasa is the Automated Testing. Run `rasa test` to try it.
- You can manually run the Rasa server with `rasa run --enable-api`. See: https://rasa.com/docs/rasa/http-api
- Rasa should give you the endpoint you should connect to, e.g.:
  `Starting Rasa server on http://localhost:5005`
- Here's the link to the API documentation: https://rasa.com/docs/rasa/pages/http-api
- See also: https://rasa.com/docs/rasa/connectors/your-own-website/#restinput
- In a nutshell, you can start conversing with the bot through:
  `http://localhost:5005/webhooks/rest/webhook`
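As a reference, the REST channel takes a JSON body with a sender id and a message text. A minimal stdlib-only sketch of building that request (the sender id is arbitrary, and actually sending it requires a server started with `rasa run --enable-api`):

```python
import json
from urllib import request

# Body shape for the REST input channel: a conversation/sender id plus the text.
payload = {"sender": "user-123", "message": "Hello"}

req = request.Request(
    "http://localhost:5005/webhooks/rest/webhook",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With a running server, the response is a JSON list of bot reply messages:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.get_method(), req.get_full_url())
```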
- After a bit of debugging here and there, I ran into not being able to run `rasa train` because of a missing required model. Upon further research, I found out that you can download it here:
- Issue with training a model for the first time (`./models` doesn't have any model content): for some reason, Rasa is not able to store models in the `./models` folder despite training successfully. This doesn't happen when the `./models` folder already contains a model to start with.
  - We might need to train a model and have it stored indefinitely in our `./models` folder as our base?
- [RESOLVED] Currently having issues with this:
  `rasa.exceptions.ModelNotFound: No NLU or Core data for unpacked model at: '/var/folders/4m/6prcxkh10s71tzn6frvdy7lm0000gn/T/tmpk5ekiq3q'`
  It seems like it's looking for the default `core` and `nlu` folders, which can't be found when simply running `rasa train --fixed-model-name ./models/model.tar.gz`.
- Fix my local environment's Python installation
- Install Rasa Open Source locally
- Run initial training with the default NLU Training Data
  - Run `rasa train` to generate a model, a.k.a. `model 1`
  - Run another training with different data, a.k.a. `model 2`
  - GOAL: generate two different models from two different sets of training data
- Test if it works using the command line
  - Run `rasa shell`
  - Chat with the bot
- Test API functionality by starting the Rasa server
  - Run `rasa run`
  - Load `model 1` using the API
  - Test chat
  - Load `model 2`
  - Test chat
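The model-loading steps above can go through the HTTP API. A sketch assuming the `PUT /model` endpoint from the HTTP API docs linked earlier (the model path below is a placeholder; substitute one of the trained archives in `./models`):

```python
import json
from urllib import request

# Hypothetical model path; replace with a real trained archive from ./models.
body = json.dumps({"model_file": "models/model-1.tar.gz"}).encode("utf-8")

req = request.Request(
    "http://localhost:5005/model",
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)

# request.urlopen(req)  # requires a server started with `rasa run --enable-api`
print(req.get_method(), req.get_full_url())
```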
Using the installation process for Rasa Open Source, build a dockerized environment for working with Rasa.
- Build the Dockerfile with Python as the base image
- Install the Rasa libraries
- Run initial training with `rasa train` via the Dockerfile
- Set the entrypoint as `rasa`
- Run CMD `['run', '--enable-api']` - this should start the server
- Test the initially trained model
  - Question: Do I need to have an initially trained model as my base? Or is it okay to always start from scratch using `rasa train`? 🤔
- Test training via the API - train 2-3 different models
- Test loading/unloading of models
  - Question: Is it possible to create different sets of testing materials?
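The plan above might translate into a Dockerfile roughly like this - a sketch, assuming the project files (`config.yml`, `domain.yml`, `data/`) sit next to the Dockerfile; the base image tag is an assumption:

```dockerfile
FROM python:3.8-slim

WORKDIR /usr/src/app
RUN pip install rasa==2.0.0rc2

# Copy the project (config.yml, domain.yml, data/, etc.) into the image
COPY . .

# Train an initial model at build time; piping "no" skips the interactive
# "talk to the trained assistant?" prompt (as seen in the build log below)
RUN echo -e "no\n" | rasa train

ENTRYPOINT ["rasa"]
CMD ["run", "--enable-api"]
```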
- Response when running `rasa run` without trained models:

      $ rasa run
      No model found. You have three options to provide a model:
      1. Configure a model server in the endpoint configuration and provide the configuration via '--endpoints'.
      2. Specify a remote storage via '--remote-storage' to load the model from.
      3. Train a model before running the server using `rasa train` and use '--model' to provide the model path.
      For more information check https://rasa.com/docs/rasa/model-storage.
- Running `rasa run` with an already trained model logs:

      Step 11/14 : RUN echo -e "no\n" | rasa train
       ---> Running in fa8a1582ed4d
      2020-09-30 07:48:47 WARNING rasa.shared.utils.validation - Training data file data/rules.yml doesn't have a 'version' key. Rasa Open Source will read the file as a version '2.0' file. See https://rasa.com/docs/rasa.
      Nothing changed. You can use the old model stored at '/usr/src/app/models/20200929-141249.tar.gz'.
- Rasa will always load the server with the latest trained model upon running `docker-compose up`.
  - I attempted to remove the last model and re-run `docker-compose`, and found that it ran training again. This means that Rasa probably keeps track of the last trained model somewhere in a cache or some file.
- If you delete the models stored in the `./models` folder while a model is already loaded, it doesn't affect the Rasa server.
- If no model was loaded, you will get something like this:
      {
        "config": {
          "store_entities_as_slots": true
        },
        "session_config": {
          "session_expiration_time": 0,
          "carry_over_slots_to_new_session": true
        },
        "intents": [],
        "entities": [],
        "slots": {},
        "responses": {},
        "actions": [],
        "forms": {},
        "e2e_actions": []
      }
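The empty payload above can be checked programmatically. A small sketch that just parses the payload as shown (assuming it came from the server's domain endpoint) and flags the "no model loaded" state:

```python
import json

# The payload returned when no model is loaded, as shown above.
empty_domain = json.loads("""
{
  "config": {"store_entities_as_slots": true},
  "session_config": {
    "session_expiration_time": 0,
    "carry_over_slots_to_new_session": true
  },
  "intents": [],
  "entities": [],
  "slots": {},
  "responses": {},
  "actions": [],
  "forms": {},
  "e2e_actions": []
}
""")

# With no model loaded there are no intents, entities, or actions to serve.
no_model_loaded = not (empty_domain["intents"] or empty_domain["actions"])
print(no_model_loaded)  # → True
```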
- Can we run model training via the API without a previously trained model? **Answer:** Yes.
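A sketch of kicking off training over HTTP, assuming the `POST /model/train` endpoint from the HTTP API docs linked earlier (the training-data payload is left as a placeholder; see those docs for the exact body format):

```python
from urllib import request

# Placeholder payload: the real body carries the training data and config
# in the format described by the HTTP API documentation.
training_payload = b"..."

req = request.Request(
    "http://localhost:5005/model/train",
    data=training_payload,
    method="POST",
)

# request.urlopen(req)  # needs a running server; works with an empty ./models
print(req.get_method(), req.get_full_url())
```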
- Can we run `rasa run` without an available pre-trained model? **Answer:** Yes.