Azure Custom Vision - Local model viewer Sample

License: MIT Twitter: elbruno GitHub: elbruno

Create an object detection model in Azure Custom Vision and run the model locally in a desktop app in minutes.

This is a sample project to use an object detection model, created using Azure Custom Vision, locally in a desktop application. The model is extracted from the "Custom Vision Export to Docker - Linux" feature.

To build an Object Detection project, you can follow this tutorial: Quickstart: Build an object detector with the Custom Vision website

Disclaimer: This demo uses a virtual environment with Python 3.7, which is the version required by some of the project dependencies.

Prerequisites

  1. 🐍 Anaconda

    Download and install the latest version of Anaconda.

  2. 🪄 Create a local virtual environment

    Open the Anaconda PowerShell Prompt and run the following commands to create a virtual environment named [demo]:

    conda create -n demo python=3.7
    conda activate demo

    Once the virtual environment is activated, the PowerShell prompt should show the (demo) prefix.

  3. ▶️ Install project dependencies:

    Run the following commands to install the required dependencies:

    # install latest version of OpenCV
    pip install opencv-python
    # Install tensorflow and general dependencies
    pip install --no-cache-dir numpy~=1.17.5 tensorflow~=2.0.2 flask~=2.1.2 pillow~=7.2.0 protobuf~=3.20.0
    # Install image processing library
    pip install --no-cache-dir mscviplib

    The environment is ready to be used.

  4. ✅ Check environment state:

    The file 00CheckEnv.py tests whether all the requirements were successfully installed.

    Run it with the following command:

    python .\00CheckEnv.py

    The output should be similar to this one:

    TensorFlow: 2.0.4
    OpenCV: 4.8.0
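The contents of 00CheckEnv.py are not shown in this README; a minimal sketch of what such a check script might look like (the real file in the repo may differ) is:

```python
# Illustrative environment check, similar in spirit to 00CheckEnv.py:
# report the installed versions of the two main dependencies.

def check_environment():
    """Return one report line per dependency."""
    lines = []
    try:
        import tensorflow as tf
        lines.append("TensorFlow: {}".format(tf.__version__))
    except ImportError:
        lines.append("TensorFlow: not installed")
    try:
        import cv2
        lines.append("OpenCV: {}".format(cv2.__version__))
    except ImportError:
        lines.append("OpenCV: not installed")
    return lines

if __name__ == "__main__":
    for line in check_environment():
        print(line)
```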

Download the Custom Vision model

  1. Once your model is trained, export and download the Docker Linux version of the model.

    Export model to Docker Linux

  2. Extract the model and, from the app folder, copy the following files to the src folder of this repository.

    • labels.txt
    • model.pb
    • predict.py
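The labels.txt file exported by Custom Vision holds one class name per line, and the prediction code maps class indices onto it. A small hedged sketch of loading it (the repo's predict.py handles this itself; this is only illustrative):

```python
# Load a Custom Vision labels.txt file: one label per line, in the
# order that matches the model's class indices.

def load_labels(path):
    """Return the list of labels from a labels.txt file."""
    with open(path, "r", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]
```

With this, `load_labels("labels.txt")[class_id]` gives the class name for a detection.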

Select the right Camera

  1. The file .\src\05CameraTest.py helps you identify the right camera to use. Edit the file and change the index in the line [cap = cv2.VideoCapture(0)] until the right camera is in use.

    import cv2
    import time
    
    # access to the camera, change the index to use the right camera
    cap = cv2.VideoCapture(0)
  2. Navigate to the src folder and run the file 05CameraTest.py with the following command:

    python .\05CameraTest.py
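Instead of changing the index by trial and error, you can also probe the available indices programmatically. A hedged sketch (not part of the repo) that lists every index OpenCV can open:

```python
# Probe camera indices and report which ones OpenCV can open.
# Falls back to an empty list when OpenCV is not installed.

def find_cameras(max_index=5):
    """Return the camera indices in [0, max_index) that can be opened."""
    try:
        import cv2
    except ImportError:
        return []
    available = []
    for index in range(max_index):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            available.append(index)
        cap.release()
    return available

if __name__ == "__main__":
    print("Available camera indices:", find_cameras())
```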

Run Locally

  1. The file .\src\10CameraApp.py uses the camera and the exported model to detect objects and display them in a window. Update the file to use the right camera index.

  2. Navigate to the src folder and run the app with the following command:

    python .\10CameraApp.py
  3. Once the app is running, you can press the following keys to enable or disable some features.

    • Press D to enable or disable the detection
    • Press L to show or hide the labels
    • Press F to show or hide the FPS
    • Press Q to exit
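The keyboard toggles above boil down to a small piece of state handling. A hedged sketch of how such a handler could look (10CameraApp.py may structure this differently; the names `make_flags` and `handle_key` are illustrative):

```python
# Feature flags toggled by the keys described above.

def make_flags():
    """Initial state: detection, labels, and FPS display all enabled."""
    return {"detection": True, "labels": True, "fps": True}

def handle_key(key, flags):
    """Apply a cv2.waitKey code to the flags; return False to exit."""
    ch = chr(key & 0xFF).lower()
    if ch == "d":
        flags["detection"] = not flags["detection"]
    elif ch == "l":
        flags["labels"] = not flags["labels"]
    elif ch == "f":
        flags["fps"] = not flags["fps"]
    elif ch == "q":
        return False
    return True
```

In the main loop this would be called as `handle_key(cv2.waitKey(1), flags)`, breaking out of the loop when it returns False.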
  4. This is an example of the app running.

    Detecting Captain America Shield and Cancer Sign

Author

👤 Bruno Capuano

🤝 Contributing

Contributions, issues and feature requests are welcome!

Feel free to check the issues page.

Show your support

Give a ⭐️ if this project helped you!

📝 License

Copyright © 2023 Bruno Capuano.

This project is MIT licensed.