- Python requirements (`python -m pip install -r requirements.txt`)
- ModelDB (`docker-compose -f docker-compose-all.yaml pull` at this repository's root)
- Docker
- Jenkins

- Run ModelDB.
- Run Jenkins (`jenkins-lts --httpPort=7070`, since ModelDB occupies port 8080).
- Run Jupyter (`jupyter notebook`).
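
Once all three are running, a quick sanity check from Python (a sketch assuming the ports above, plus Jupyter's default of 8888):

```python
import urllib.error
import urllib.request

# Ports from the setup above; Jupyter's default is 8888.
SERVICES = {
    "ModelDB": "http://localhost:8080/",
    "Jenkins": "http://localhost:7070/",
    "Jupyter": "http://localhost:8888/",
}

for name, url in SERVICES.items():
    try:
        urllib.request.urlopen(url, timeout=5)
    except urllib.error.HTTPError:
        pass  # an HTTP error response still means the service is listening
    except OSError as exc:
        print(f"{name} not reachable at {url}: {exc}")
        continue
    print(f"{name} is up at {url}")
```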
The ad hoc workflow logs model ingredients to S3 and fetches them, one by one, to deploy them. This can lead to mismatched artifacts if S3 buckets are not carefully managed, and it can be difficult to track which results are associated with which model.
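
As a sketch of what that ad hoc logging amounts to (the bucket and key names are illustrative; the notebook defines the real ones):

```python
import boto3

s3 = boto3.client("s3")

# Illustrative bucket and keys: the notebook defines the real ones.
BUCKET = "my-model-bucket"
s3.upload_file("model.pt", BUCKET, "models/model.pt")
s3.upload_file("metadata.json", BUCKET, "models/metadata.json")

# Each ingredient is a separate S3 object; nothing ties the model to its
# metadata, so a later fetch can easily pair mismatched versions.
```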
- Run the notebook, logging model ingredients to S3.
- Run the Jenkins pipeline, building a Docker image to serve the model.
  - Or run `02-package/run.sh` directly, setting the environment variables `BUCKET`, `MODEL_PATH`, and `METADATA_PATH` to point at the model ingredients on S3 (see the sketch after this list).
- Run `03-predict/run.sh` to serve the model.
- Run `03-predict/predict.sh` to make predictions against the model at `localhost:5000`.
- View live metrics from the model in your web browser at http://localhost:9090/.
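
If you skip Jenkins, the direct invocation looks roughly like this (the values are placeholders for wherever the notebook uploaded the ingredients):

```python
import os
import subprocess

# Placeholder values: use the bucket and keys the notebook logged to.
env = dict(
    os.environ,
    BUCKET="my-model-bucket",
    MODEL_PATH="models/model.pt",
    METADATA_PATH="models/metadata.json",
)
subprocess.run(["bash", "02-package/run.sh"], env=env, check=True)
```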
The ModelDB Versioning workflow leverages our versioning system to snapshot model ingredients together, linking them to experimental results and enabling reproducibility, reverts, and merges of promising ingredients.
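
In client code, the idea looks roughly like this (the host, names, and blob paths are illustrative, and the exact calls may differ by client version; the demo notebooks contain the real ones):

```python
from verta import Client
from verta.code import Notebook
from verta.configuration import Hyperparameters
from verta.dataset import S3

# Illustrative host and names: see the notebooks for the real ones.
client = Client("http://localhost:3000")
repo = client.get_or_create_repository("demo-repo")
commit = repo.get_commit(branch="master")

# Snapshot the model ingredients together in a single commit.
commit.update("code/train", Notebook("train.ipynb"))
commit.update("config/hyperparams", Hyperparameters({"learning_rate": 0.01}))
commit.update("data/train", S3("s3://my-model-bucket/data/train.csv"))
commit.save("Snapshot model ingredients")

# Link the commit to an experiment run, so results stay tied to exactly
# these ingredients and the run can be reproduced or reverted later.
client.set_project("demo-project")
client.set_experiment("demo-experiment")
run = client.set_experiment_run()
run.log_commit(
    commit,
    {"code": "code/train", "config": "config/hyperparams", "data": "data/train"},
)
```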
- Run the notebooks, logging model ingredients to S3.
- Run the Jenkins pipeline, building a Docker image to serve the model.
  - Or run `02-package/run.sh` directly, setting the environment variables `VERTA_HOST` and `RUN_ID` to fetch the associated model ingredients from ModelDB.
- Run `03-predict/run.sh` to serve the model.
- Run `03-predict/predict.sh` to make predictions against the model at `localhost:5000`.
  - Try a few German phrases, as well (a scripted version follows this list)!

    ```python
    [
        "Guten Morgen.",              # Good morning.
        "Gute Nacht.",                # Good night!
        "Sie sind sehr freundlich.",  # You're very kind!
        "Da muss ich widersprechen.", # I disagree.
        "Es ist ein Notfall!",        # It's an emergency!
        "Ich verstehe nicht.",        # I don't understand.
        "Ich bin sauer.",             # I'm angry.
    ]
    ```
- View live metrics from the model in your web browser at http://localhost:9090/.
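
To script predictions instead of using the shell helper, something like this works, assuming a JSON payload and a `/predict` endpoint (both hypothetical; `03-predict/predict.sh` shows the request the service actually expects):

```python
import json
import urllib.request

phrases = ["Guten Morgen.", "Ich verstehe nicht."]

for phrase in phrases:
    # Hypothetical endpoint and payload shape: check 03-predict/predict.sh
    # for the request the model actually expects.
    req = urllib.request.Request(
        "http://localhost:5000/predict",
        data=json.dumps([phrase]).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(phrase, "->", resp.read().decode("utf-8"))
```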
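If the metrics server on port 9090 is Prometheus (that is its default port), its HTTP API can be queried directly as well; `up` is a built-in metric reporting which scrape targets are healthy, and the model's own metric names appear in the Prometheus UI:

```python
import json
import urllib.parse
import urllib.request

# Query Prometheus's HTTP API; "up" reports which scrape targets are healthy.
query = urllib.parse.urlencode({"query": "up"})
with urllib.request.urlopen(f"http://localhost:9090/api/v1/query?{query}") as resp:
    print(json.dumps(json.load(resp), indent=2))
```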