awslabs/aws-iot-core-integration-with-nvidia-deepstream

adding deepstream config - peoplenet


moved from aws-samples/aws-iot-greengrass-deploy-nvidia-deepstream-on-edge#4

On the Jetson, the app is normally executed with

deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt 

after first downloading the model files with

mkdir -p ../../models/tlt_pretrained_models/peoplenet && \
    wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v1.0/files/resnet34_peoplenet_pruned.etlt \
    -O ../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt

I am looking to get a USB camera to work with a custom DeepStream configuration file that loads the PeopleNet model from the DeepStream samples for Jetson.
ref: https://forums.developer.nvidia.com/t/output-tensor-in-facedetectir/166311/9

The tutorial I tried, https://aws.amazon.com/blogs/iot/how-to-integrate-nvidia-deepstream-on-jetson-modules-with-aws-iot-core-and-aws-iot-greengrass/, allowed me to run the test4 and test5 DeepStream apps, but I would like to take the input from the USB camera and also use the pretrained PeopleNet model, which uses its own model/config file.
How can I customize it?

Moreover, the tutorial I used points out:
"Once you see messages coming into AWS IoT Core, there are a lot of options to further process them or store them on the AWS Cloud. One simple example would be to use AWS IoT Rules to push these messages to a customized AWS Lambda function, which parses the messages and puts them in Amazon DynamoDB. You may find the following documents helpful in setting up this IoT rule to storage pipeline:"
Could you provide a more complete example of how to do so, please?

@yuxiny2586
if there were detailed instructions on how to proceed from the MQTT / IoT DeepStream test4/test5 tutorial to getting the messages saved in AWS, it would dramatically help with the Lambda setup etc.

@AndreV84 Some answers:

  • If the USB camera is a standard UVC camera, you can modify the config to use a v4l2 source (a rough example follows this list). You would point GStreamer to /dev/videox, where x is the device number.
  • Re: Model. What customization are you trying to make to the model?
  • If you need help with consuming MQTT messages in the cloud, you can take a look at this AWS IoT Workshop here and here.
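
For a deepstream-app style config (e.g. deepstream_app_source1_peoplenet.txt), the USB camera section would look roughly like the snippet below. This is only a sketch based on the USB-camera sample config shipped with DeepStream; the resolution and framerate are placeholders and must match a mode your camera actually supports (check with v4l2-ctl --list-formats-ext).

[source0]
enable=1
# type=1 selects a V4L2 camera source in deepstream-app configs
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# 0 maps to /dev/video0 -- change to match your device number
camera-v4l2-dev-node=0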

@mtalreja-a thank you for following up!

  1. by now I got it to work with the test4/test5 DeepStream apps over MQTT, following literally every step from the tutorial
  2. the next step would be to save the messages with Lambda, right? For DeepStream there is no particular example to follow yet?
  3. I was considering loading a different model/config than the default used in the tutorial:
    I can execute the models locally with
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_facedetectir.txt

so it would probably require customizing test4/test5 somehow to load a different config file with a different .engine model
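
Something like the following in the primary GIE (nvinfer) config is what I have in mind, mirroring the keys used by the TLT sample configs; the paths and the engine filename are my guesses and would need to be checked against the config_infer_primary_peoplenet.txt shipped with the samples:

[property]
# encoded TLT model downloaded from NGC, plus the key it was exported with
tlt-encoded-model=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt
tlt-model-key=tlt_encode
# serialized engine; built on first run if it does not exist yet (filename is a guess)
model-engine-file=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
labelfile-path=labels_peoplenet.txt
num-detected-classes=3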

@AndreV84 Sorry I missed this.

  1. That's awesome.
  2. There are many ways to save the message, depending on your use case. You can take the data to a database like DynamoDB, go into Kinesis to batch the data and take an aggregate action, connect the messages to SNS alerts, or use a Lambda function to take a custom action (a minimal Lambda-to-DynamoDB sketch follows this list). Take a look at the actions allowed here: https://docs.aws.amazon.com/iot/latest/developerguide/iot-rule-actions.html
  3. I believe your approach will work. The model efficiency will depend on the operators being used.
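
As a concrete starting point for item 2, here is a minimal sketch of the rule-to-Lambda-to-DynamoDB path. The table name, key names, and message fields below are placeholders, not part of the DeepStream schema; check the actual payload your app publishes and adapt accordingly. An IoT rule such as SELECT * FROM 'your/deepstream/topic' with a Lambda action would invoke the handler for every message.

import json
import os
import uuid

import boto3

# Placeholder table; create a DynamoDB table with a string partition key
# named "message_id" (or change the key below) before wiring up the rule.
TABLE_NAME = os.environ.get("TABLE_NAME", "DeepStreamDetections")
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def handler(event, context):
    # With a Lambda rule action, the event is the JSON-decoded MQTT payload
    # published by the DeepStream app. "@timestamp" is only an example field;
    # inspect a real test4/test5 message to see what is actually there.
    item = {
        "message_id": str(uuid.uuid4()),
        "timestamp": str(event.get("@timestamp", "")),
        "payload": json.dumps(event),  # keep the raw message for later analysis
    }
    table.put_item(Item=item)
    return {"status": "stored", "message_id": item["message_id"]}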

Dear @mtalreja-a
Just so as not to overcomplicate things:
I am trying to research whether it is possible to implement the homework using the AWS method.
https://github.com/MIDS-scaling-up/v2/tree/master/week03/hw
[it is not for a grade though]
Does it seem feasible?
Thank you very much!