The goal of this project is to recognize objects at the edge with the Percept DK device and Vision SoM camera, using Azure Video Analyzer (AVA) as the platform. Additionally, video is saved to the cloud with AVA by invoking methods that communicate directly with the edge device for continuous recording (the default when using the "Deploy to Azure" button below). By default the deployment is set up for people detection with an open-source, prebuilt ML model from this zoo. At this time, the detections can be seen flowing into the IoT Hub messages from the edge.
This repo is under rapid iteration, is not production ready, and will be updated often. Currently it offers:
- Process to deploy Azure Video Analyzer (and Azure resources), plus edge modules, to the Percept DK and initiate cloud recording
- [Optional] Python console app in the `ava_app` folder (for debugging), run on a dev/local machine, that starts and stops video recording to the cloud
- [Optional] Deployment manifests in the `deploy/edge` folder to reset the Percept DK to its original modules or redeploy the AVA pipeline (for debugging)
Future work:
- Default deployment working with AVA widgets for metadata overlay in the Azure Portal
- Improve quality of video sent to the cloud from the Percept DK (currently somewhat degraded when there is movement)
- A final dashboard making use of the AVA widgets for metadata (bounding box) overlay
- Percept DK (Purchase)
- Azure Subscription - Free trial account
- [Optional - for debugging] Python 3.6+ (preferably an Anaconda release)
- Follow Quickstart: unbox and assemble your Azure Percept DK components and the next steps.
Important: The following "Deploy to Azure" button will provision the Azure resources listed below, and you will begin incurring costs associated with your network and Azure resources immediately, as this solution facilitates continuous video recording to the cloud. To estimate the potential costs, you may wish to use the pricing calculator before you begin and/or plan to test in a single resource group that can be deleted after testing is over.
After the script finishes you will have the following Azure resources in a new Resource Group, in addition to the existing IoT Hub you specified:
- Storage Account
- Azure Video Analyzer
- With an active pipeline for video recording running
- Container Registry
- Managed Identities
IMPORTANT: To be able to redeploy the AVA modules, keep the AVA Provisioning Token for your records (it cannot be recovered after redeploying with alternative deployment manifests). After deployment, go to the specified IoT Hub (likely in a different resource group) --> IoT Edge --> your device name --> avaedge Module --> Module Identity Twin --> under "properties" --> "desired" --> copy and save "ProvisioningToken".
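If you prefer to script this step, a minimal sketch using the `azure-iot-hub` Python package is shown below; the connection string and device name placeholders are assumptions you must fill in.

```python
# Sketch: read the AVA ProvisioningToken from the avaedge module identity twin.
# Assumes `pip install azure-iot-hub` and an IoT Hub connection string with
# registry read permissions; the placeholders below are assumptions to fill in.
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "<your IoT Hub connection string>"  # assumption
DEVICE_ID = "<your Percept DK device name>"                    # assumption

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)
twin = registry_manager.get_module_twin(DEVICE_ID, "avaedge")

# The token lives under properties.desired on the module identity twin.
print(twin.properties.desired.get("ProvisioningToken"))
```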
View the videos by going to the Azure Portal --> select your AVA resource group --> select the Video Analyzer resource --> go to Videos --> select "sample-http-extension" and wait for the live stream to appear. It may take 1-2 minutes after the deployment is complete for a live video stream to appear under AVA Videos.
After deploying the resources in Azure as done above (by using the "Deploy to Azure" button), refer to the next steps as follows (mainly for debugging and advanced features).
This is to manually control the AVA direct methods (to stop and start continuous video recording, or CVR).
- If using Anaconda Python (recommended), set up a conda environment; otherwise, use `venv` to create a virtual environment for this solution.
- Install the Python dependencies as follows.
pip install -r requirements.txt
- Follow AVA cloud to device sample console app instructions.
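As a rough illustration of what the console app does, the sketch below invokes the AVA `livePipelineDeactivate`/`livePipelineActivate` direct methods on the `avaedge` module via the `azure-iot-hub` package. The live pipeline name is an assumption and depends on your deployment.

```python
# Sketch: stop/start continuous video recording (CVR) by calling AVA direct
# methods on the avaedge module. The pipeline name is an assumption; check the
# deployed pipeline topology for the actual name used in your setup.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

IOTHUB_CONNECTION_STRING = "<your IoT Hub connection string>"  # assumption
DEVICE_ID = "<your Percept DK device name>"                    # assumption
LIVE_PIPELINE_NAME = "<your live pipeline name>"               # assumption

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

def invoke(method_name, payload):
    """Invoke a direct method on the avaedge module and return the response."""
    method = CloudToDeviceMethod(
        method_name=method_name, payload=payload, response_timeout_in_seconds=30
    )
    return registry_manager.invoke_device_module_method(DEVICE_ID, "avaedge", method)

# Stop cloud recording
invoke("livePipelineDeactivate", {"@apiVersion": "1.0", "name": LIVE_PIPELINE_NAME})

# Start cloud recording again
invoke("livePipelineActivate", {"@apiVersion": "1.0", "name": LIVE_PIPELINE_NAME})
```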
For debugging and further understanding, to deploy or redeploy from a manifest, get advanced features, or get more information on the deployment process, go to the following folder.
Example output from Percept DK through the AVA edge module to IoT Hub:
{
"timestamp": 146434442279126,
"inferences": [
{
"type": "entity",
"entity": {
"tag": {
"value": "person",
"confidence": 0.664062
},
"box": {
"l": 0.244,
"t": 0.321,
"w": 0.676,
"h": 0.343
}
}
}
]
}
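For reference, a small, hypothetical helper (not part of this repo) that pulls person detections out of a message body in this shape could look like:

```python
# Sketch: extract 'person' detections from an AVA inference message body of the
# shape shown above. Field names follow the sample; the threshold is illustrative.
import json

def person_detections(message_body: str, min_confidence: float = 0.5):
    """Yield (confidence, box) for each detected person above the threshold."""
    event = json.loads(message_body)
    for inference in event.get("inferences", []):
        if inference.get("type") != "entity":
            continue
        entity = inference["entity"]
        tag, box = entity["tag"], entity["box"]
        if tag["value"] == "person" and tag["confidence"] >= min_confidence:
            yield tag["confidence"], box
```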
- The module cannot access the path /var/lib/videoanalyzer/AmsApplicationData specified in the 'applicationDataDirectory' desired property. This may occur due to previous deployments of AVA where the application data directory was populated with files. To refresh this directory you will need to stop the iotedge daemon, then delete and recreate the directory as follows.
sudo systemctl stop iotedge
sudo rm -fr /var/lib/videoanalyzer/AmsApplicationData
sudo mkdir /var/lib/videoanalyzer/AmsApplicationData
sudo chown -R 1010:1010 /var/lib/videoanalyzer/
sudo systemctl start iotedge
Note: for newer iotedge daemons you may need to replace the `iotedge` command with `aziot-edged`.
- Plotly sample app
- Azure Video Analyzer deployment
- AVA Python sample app
- Azure Percept documentation
- Azure Video Analyzer documentation
The Vision SoM on the Percept DK returns JSON in the following format:
{
"NEURAL_NETWORK": [
{
"bbox": [0.404, 0.369, 0.676, 0.984],
"label": "person",
"confidence": "0.984375",
"timestamp": "1626991877400034126"
}
]
}
Here, the simple HTTP server (`simpleserver` module), an advanced feature, translates it into the correct format for AVA.
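A rough sketch of that translation is shown below; it assumes the Vision SoM `bbox` is `[left, top, right, bottom]` in normalized coordinates, which should be verified against the `simpleserver` module before relying on it.

```python
# Sketch: convert the Vision SoM payload into the AVA inference schema shown
# earlier. The bbox layout ([left, top, right, bottom], normalized) is an
# assumption and should be confirmed against the simpleserver module.
def som_to_ava(som_payload: dict) -> dict:
    inferences = []
    for det in som_payload.get("NEURAL_NETWORK", []):
        left, top, right, bottom = det["bbox"]
        inferences.append({
            "type": "entity",
            "entity": {
                "tag": {
                    "value": det["label"],
                    # confidence arrives as a string in the SoM payload
                    "confidence": float(det["confidence"]),
                },
                "box": {"l": left, "t": top, "w": right - left, "h": bottom - top},
            },
        })
    return {"inferences": inferences}
```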