criteria for frameId
jaceyyj opened this issue · 0 comments
I completed the deployment of Spatial Analysis by referencing the following docs.
https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest-NonASE.json
For the video URL and polygon settings, I referred to the following link.
https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json
An output stream of JSON messages was generated and sent to my Azure Blob storage. I was trying to visualize the bounding-box coordinates from the output on the recorded video without using the .debug operation. The output, especially the 'detection' part, can be visualized as a video only if frameId is sampled at regular time intervals.
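For anyone attempting the same overlay: instead of relying on frameId, one option is to map each event's source timestamp to a frame index of the recording and draw the normalized bounding box yourself (e.g. with OpenCV's `cv2.rectangle`). A minimal sketch of that mapping, assuming the recording's start timestamp and frame rate are known and the box is a normalized `[x, y, w, h]` (the function names here are my own, not part of the Spatial Analysis SDK):

```python
def timestamp_to_frame(event_ts_ms: float, video_start_ts_ms: float, fps: float) -> int:
    """Map an event timestamp (ms) to the nearest frame index of the
    recording, given the recording's start timestamp and frame rate."""
    return round((event_ts_ms - video_start_ts_ms) / 1000.0 * fps)

def denormalize_bbox(bbox, width: int, height: int):
    """Convert a normalized [x, y, w, h] bounding box (values in 0..1)
    to pixel coordinates (x, y, w, h) for drawing on a frame."""
    x, y, w, h = bbox
    return (int(x * width), int(y * height), int(w * width), int(h * height))
```

With this, each detection can be drawn on the frame returned by `cv2.VideoCapture.read()` at the computed index, regardless of how frameId was assigned.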
Here I ran into some problems.
- Looking at the events in the JSON, the frameId range used for each event was different:
  - 'personDistanceEvent': 1 to 820
  - 'personCountEvent': 1 to 782
  - 'personZoneDwellTimeEvent': 1375 to 9066
  - 'personZoneEnterExitEvent': 1389 to 4285
  - 'personLineEvent': 1756, 2443
  - 'cameraCalibrationEvent': 7784, 8143
- Even though I set 'trigger' to 'interval', the output was not truly periodic: neither the timestamps nor the frameId values were spaced uniformly.
  This is part of the timestamps of personCountEvent.
  This is part of the frameId values of personCountEvent. Sometimes the interval is 1, sometimes 2, sometimes more.
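For context, the event section of my operation config looked roughly like this (field names follow the linked deployment.json; the exact values are illustrative, not a verbatim copy of my deployment):

```json
"events": [
  {
    "name": "personCountEvent",
    "type": "count",
    "config": {
      "trigger": "interval",
      "output_frequency": "1"
    }
  }
]
```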
Here are my questions.
- I would like to know whether there are any criteria for how frames are selected for each event.
- Is there another way to display the detection results on the video without using the .debug operation?