This is a demo project that shows how to build and serve a convolutional neural net classifier in production.
The code is not production-ready by any means, but feel free to play around with it.
Hit up @whatdoggiebot on Telegram.
You can easily run your own classifier model with the same code in a few steps:

- Create an `.env` file (see the example in `.env.sample`) and provide a Rollbar API token and a TG bot token.
- Put your trained model `.pth` file in `app/models` and set the model name and pic size in `.env`. Don't forget to tweak `arch` in `server.py` as well.
- `pip install -r requirements.txt`
- `python app/server.py` should start the bot and print `Starting up...` to your terminal.
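The `.env` step above is just a handful of `KEY=VALUE` pairs. Here's a minimal, dependency-free sketch of loading them at startup (the key names in the comment are illustrative assumptions — check `.env.sample` for the real ones):

```python
import os

def load_dotenv(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments.
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the environment.
            os.environ.setdefault(key.strip(), value.strip())

# Hypothetical usage -- key names are assumptions, not the project's actual ones:
# load_dotenv()
# tg_token = os.environ["TG_BOT_TOKEN"]
```

In production you'd likely reach for a library like `python-dotenv` instead, but the idea is the same.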
Please see Telegram docs on how to create bots and what the bots API can do.
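Under the hood, all bot traffic is plain HTTPS calls to `https://api.telegram.org/bot<token>/<method>`. A small sketch of building a `sendMessage` request with only the standard library (the token and chat id are placeholders, and nothing is actually sent here):

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"

def build_send_message(token, chat_id, text):
    """Build (but do not send) a Telegram Bot API sendMessage request."""
    url = f"{API_BASE}/bot{token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_send_message("123:ABC", 42, "Looks like a beagle!")
# To actually send it: urllib.request.urlopen(req)
```

A real bot would use a framework or at least long-poll `getUpdates`, but this is all the Bot API amounts to at the wire level.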
This classifier was built with fast.ai v1. They have MOOCs on deep learning. The whole training process, including downloading the dataset, took a few hours, well under a day.
The model runs on CPU for inference in production; you don't need a GPU to run it.
Whatdoggiebot runs on a single Google Cloud VM, deployed as a Docker image via Google Container Registry.
```
docker build . -t gcr.io/PROJECT_ID/whatdoggie:latest
docker push gcr.io/PROJECT_ID/whatdoggie:latest
gcloud compute instances reset whatdoggie-vm
```
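The image those commands build is defined by a Dockerfile. A minimal sketch of what one could look like for this layout (the base image tag and exact layout are assumptions, not the project's actual Dockerfile):

```dockerfile
FROM python:3.7-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
# The bot polls Telegram, so no port needs to be exposed.
CMD ["python", "app/server.py"]
```

Copying `requirements.txt` before the rest of the code lets Docker cache the dependency layer, so rebuilds after code changes are fast.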
The VM costs ~$10 a month, but any free Docker hosting will work as an alternative.
This model was trained on the Stanford dog breeds dataset and works pretty badly with real-world dog pics ;-). If you want to fool around, you can build your own dataset with just Google Image Search, or use a subset of Google Open Images.
Have fun!
Let me know on Twitter what you build!