A lion identification service based on lion face and whisker recognition.
This application is currently deployed via a blue/green methodology using GitHub Actions. The process is as follows (a sketch of triggering the workflows programmatically appears after the list):
- Work on code changes in a feature branch based off of the staging branch
- Submit a merge request to the staging branch
- Run the Deploy GitHub Actions workflow pointed to the staging branch
- Receive approval from project administrators
- Submit a merge request to the master branch
- Determine which environment (blue, green) is active in production
- Run the Deploy GitHub Actions workflow pointed to the inactive environment
- Point the staging and production web apps in Heroku to the inactive environment, making it active
- Run the Destroy GitHub Actions workflow pointed to the new inactive environment
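
Both the Deploy and Destroy workflows are run manually. As a minimal sketch only, assuming the Deploy workflow is defined with a `workflow_dispatch` trigger in a file named `deploy.yml` (the file name and any inputs are assumptions), it could also be triggered through GitHub's REST API:

```python
import os
import requests

# All of these values are illustrative; adjust to your repository and workflow.
OWNER = "linc-lion"
REPO = "linc-cv"
WORKFLOW_FILE = "deploy.yml"        # assumed workflow file name
TOKEN = os.environ["GITHUB_TOKEN"]  # a token with permission to run workflows

# GitHub's workflow_dispatch endpoint:
# POST /repos/{owner}/{repo}/actions/workflows/{workflow_file}/dispatches
url = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW_FILE}/dispatches"
resp = requests.post(
    url,
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    },
    # "ref" selects the branch to deploy from; an "inputs" field could select
    # the blue/green environment if the workflow defines such an input (an assumption).
    json={"ref": "staging"},
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
```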
- Clone `linc-cv-data`.
- Create a `data` folder under `linc-cv/linc_cv`.
- Copy `whisker_model_yolo.h5` from `linc-cv-data` to `linc-cv/linc_cv/data`.
- The `whisker_model_yolo.h5` model was built by previous developers. Unfortunately, the training code is missing.
- Export the following ENV variables:
  - `LINC_USERNAME`
  - `LINC_PASSWORD`
- Execute the following training commands against `linc-cv/linc_cv/main.py`, in order (a scripted version is sketched after this list):
  - `python <path_to>/linc-cv/linc_cv/main.py --parse-lion-database`
  - `python <path_to>/linc-cv/linc_cv/main.py --download-cv-images`
  - `python <path_to>/linc-cv/linc_cv/main.py --extract-cv-features`
  - `python <path_to>/linc-cv/linc_cv/main.py --train-cv-classifier`
  - `python <path_to>/linc-cv/linc_cv/main.py --download-whisker-images`
  - `python <path_to>/linc-cv/linc_cv/main.py --train-whisker-classifier`
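
For repeatability, the same pipeline can be scripted. A minimal sketch, assuming `main.py` accepts exactly the flags listed above and reads `LINC_USERNAME`/`LINC_PASSWORD` from the environment; `<path_to>` stays a placeholder for your local checkout:

```python
import os
import subprocess

# Placeholder: replace with your local path to the linc-cv checkout.
MAIN = "<path_to>/linc-cv/linc_cv/main.py"

# Credentials are read from the environment; export LINC_USERNAME and
# LINC_PASSWORD before running this script.
env = os.environ.copy()

# The training pipeline steps, in the order listed above.
steps = [
    "--parse-lion-database",
    "--download-cv-images",
    "--extract-cv-features",
    "--train-cv-classifier",
    "--download-whisker-images",
    "--train-whisker-classifier",
]

for flag in steps:
    # check=True stops the pipeline as soon as any step fails.
    subprocess.run(["python", MAIN, flag], env=env, check=True)
```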
linc-cv uses three components: Flower, Celery, and Supervisor.
- Download Conda
- Run `conda create --name linc-cv python=3.6`
- Run `conda activate linc-cv`
- Run `pip install -r requirements.txt`
- Install Redis. Celery uses Redis as its message broker (a connectivity check is sketched after this setup list).
- Download the models from the `linc-cv-data` repository to `linc_cv/data`
- Install Homebrew
- Run `brew install supervisor`
- Open `/usr/local/etc/supervisord.conf` with your editor of choice.
- Scroll to the bottom of the file.
- Replace `files = /usr/local/etc/supervisor.d/*.ini` with `files = /path/to/linc_cv/tests/supervisord/*.conf`. You need to replace `/path/to` with your local path to the `linc_cv` project.
- Open `celery.conf` and `flower.conf` under `linc_cv/tests/supervisord`.
- Replace `johndoe` in the `command` and `user` variables with your own username. This is the username you use to log in to your machine.
- You may need to modify the path in `command` if your Conda is not installed in the default location.
- Run `sudo /usr/local/opt/supervisor/bin/supervisord -c /usr/local/etc/supervisord.conf --nodaemon`. The log files `celery-classification.log`, `celery-training.log`, and `flower.log` will be created under the `linc_cv/tests` folder.
- Now you should be able to navigate to the Flower UI at http://localhost:5555/ (a quick check is sketched below).
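
  A minimal sketch, assuming Redis listens on its default port 6379 and Flower on 5555, that verifies both services are reachable (it uses the `redis` Python package and Flower's `/api/workers` endpoint):

  ```python
  import redis
  import requests

  # Check that the Redis message broker is reachable on its default port.
  broker = redis.Redis(host="localhost", port=6379)
  print("Redis ping:", broker.ping())  # True if the broker is up

  # Check that Flower is serving; /api/workers lists the Celery workers it sees.
  resp = requests.get("http://localhost:5555/api/workers", timeout=5)
  resp.raise_for_status()
  print("Celery workers known to Flower:", list(resp.json().keys()))
  ```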
- Execute the following code snippet to download the pretrained model:

  ```
  > conda activate linc-cv
  > (linc-cv) python
  ```

  ```
  >>> import pretrainedmodels
  >>> model_name = 'senet154'
  >>> model = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
  ```

- The pretrained model is saved to `$HOME/.torch`.
- Under the project directory `linc-cv`, execute the following in a terminal:

  ```
  > export API_KEY=blah
  > PYTHONPATH=$(pwd) python linc_cv/web.py
  ```
- Example of request and response (truncated for brevity) for lion face recognition:

  ```
  curl --location --request POST 'http://192.168.86.137:5000/linc/v1/classify' \
    --header 'ApiKey: blah' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "type": "cv",
      "url": "https://raw.githubusercontent.com/linc-lion/linc-cv/master/tests/images/female_lion_face_1.jpeg"
    }'
  ```

  ```
  {
    "id": "f9591d42-96e6-4178-9022-cab02cd86b3b",
    "status": "PENDING",
    "errors": []
  }
  ```

  ```
  curl --location --request GET 'http://192.168.86.137:5000//linc/v1/results/f9591d42-96e6-4178-9022-cab02cd86b3b' \
    --header 'ApiKey: blah' \
    --header 'Content-Type: application/json'
  ```

  ```
  {
    "status": "finished",
    "predictions": [
      { "lion_id": "80", "probability": 0.412 },
      { "lion_id": "40", "probability": 0.032 },
      { "lion_id": "297", "probability": 0.028 }
    ]
  }
  ```
- Example of request and response (truncated for brevity) for lion whisker recognition:

  ```
  curl --location --request POST 'http://192.168.86.137:5000/linc/v1/classify' \
    --header 'ApiKey: blah' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "type": "whisker",
      "url": "https://raw.githubusercontent.com/linc-lion/linc-cv/master/tests/images/sample_lion_whisker_23.jpg"
    }'
  ```

  ```
  {
    "id": "3f6dbfdf-98ea-4d76-92af-e5ff9912546b",
    "status": "PENDING",
    "errors": []
  }
  ```

  ```
  curl --location --request GET 'http://192.168.86.137:5000//linc/v1/results/3f6dbfdf-98ea-4d76-92af-e5ff9912546b' \
    --header 'ApiKey: blah' \
    --header 'Content-Type: application/json'
  ```

  ```
  {
    "status": "finished",
    "predictions": [
      { "lion_id": "15", "probability": 0.951 },
      { "lion_id": "372", "probability": 0.79 },
      { "lion_id": "94", "probability": 0.785 }
    ]
  }
  ```