Object detection only tracks one object of a class at a time, even when multiple objects are in the view
Closed this issue · 13 comments
Preliminary Checks
- This issue is not a duplicate. Before opening a new issue, please search existing issues.
- This issue is not a question, feature request, or anything other than a bug report directly related to this project.
Description
I am attempting to get up to speed with object detection on the ZED2i camera. I am using the example display, which I launch with roslaunch zed_display_rviz display_zed2i.launch.
Initially it seemed to be working perfectly: bounding boxes are displayed over the detected objects and track them as the camera moves around.
However, when I tested with multiple people in the camera view, it appears to only track one person at a time. With two people in view, only one is detected. When the detected person leaves the scene, the previously undetected person (still standing in the same place) is then detected.
Steps to Reproduce
- roslaunch zed_display_rviz display_zed2i.launch
...
Expected Result
Multiple people within camera view are detected & tracked
Actual Result
Detection and tracking only occurs for one person out of the multiple in the scene.
ZED Camera model
ZED2i
Environment
Ubuntu 20.04
NVIDIA A2000
ZED SDK 4.0
ROS Noetic
Anything else?
No response
@jamesheatonrdm is the detection working with ZED examples outside ROS?
Did you try to raise the detection confidence value?
Can you post pictures showing the problem?
I have run the 'tutorial 6 - object detection' Python code, and the command-line output shows that each person in the scene is detected; however, I am unsure how to visualise this.
Raising the confidence value has no effect.
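One way to visualise what tutorial 6 prints is to format each detection yourself. Below is a minimal sketch: the `Detection` record and sample data are hypothetical stand-ins for the SDK's per-object data (id, label, confidence, and a 2D bounding box of four image-plane corners), and `summarize` is just an illustrative helper, not part of the ZED SDK.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Hypothetical stand-in for one detected object from tutorial 6
    id: int
    label: str
    confidence: float
    bounding_box_2d: list  # four [x, y] corners, top-left first, clockwise

def summarize(detections):
    """Return one human-readable line per detected object."""
    lines = []
    for d in detections:
        (x0, y0), (x1, y1) = d.bounding_box_2d[0], d.bounding_box_2d[2]
        lines.append(f"#{d.id} {d.label} {d.confidence:.0f}% "
                     f"box=({x0},{y0})-({x1},{y1})")
    return lines

if __name__ == "__main__":
    # Two people in one frame: if the detector sees both, both lines print
    frame = [
        Detection(0, "Person", 87.0, [[310, 120], [520, 120], [520, 680], [310, 680]]),
        Detection(1, "Person", 64.0, [[700, 140], [880, 140], [880, 690], [700, 690]]),
    ]
    for line in summarize(frame):
        print(line)
```

Printing one line per object per frame makes it obvious whether the SDK itself is reporting multiple people, independently of any viewer.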
@Myzhar I have managed to build and run the concurrent detections sample.
In this case all of the objects are detected, even when multiple people are within the scene.
Are you using the same values for the parameters in the ROS wrapper?
confidence, AI model, etc
I am using whatever the defaults are for both (I would presume they are the same?). I have not changed any values anywhere in the code except for setting the od_enabled parameter to true.
They are not the same. The ROS wrapper has its own parameters:
https://github.com/stereolabs/zed-ros-wrapper/blob/master/zed_wrapper/params/common.yaml#L72-L84
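The linked lines cover the wrapper's `object_detection` section. A paraphrased excerpt is below; the exact parameter names and defaults may differ between wrapper versions, so check your local copy of the file.

```yaml
# zed_wrapper/params/common.yaml (excerpt, paraphrased)
object_detection:
    od_enabled:           true                        # false by default
    model:                'MULTI_CLASS_BOX_ACCURATE'  # detection model
    confidence_threshold: 50                          # 0-100
    max_range:            15.0                        # meters
```

These are the values that need to match whatever the standalone SDK sample uses before the two setups can be compared fairly.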
I am familiar with ROS.
The Concurrent Detections sample does not appear to have any parameters that I can pass via the command line.
Looking at the main.cpp file, I see that the object detection model is MULTI_CLASS_BOX_MEDIUM, which does not appear to be an option in the common.yaml file for the ZED wrapper (the options are MULTI_CLASS_BOX or MULTI_CLASS_BOX_ACCURATE).
It appears that the model MULTI_CLASS_BOX_ACCURATE is available to both programs, so I have set them to both use it.
I have confirmed that confidence is the same between the two (50).
The range was slightly higher (20 vs 15) in the concurrent detections program, so I set the ROS wrapper to match. However, setting all three of these parameters to the same values has no effect.
You are too close to the camera and most of the camera view is desk surface
@Myzhar here is another example, this time with 4 people in the view. Only one is detected
You are too close to the camera and most of the camera view is desk surface
But you can still see that the detection works when it is just me in the frame? It is clear that the issue occurs when a second person is added to the frame.
The issue must be coming from the way these detections are rendered in RViz: echoing /zed2i/zed_node/obj_det/objects shows multiple people detected per frame.
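That topic-level sanity check can be automated. The sketch below assumes the zed_interfaces/ObjectsStamped layout, where msg.objects is a list of detections each carrying a `label` and `confidence`; the counting helper itself is plain Python, so it can be exercised without ROS installed.

```python
from collections import Counter

def count_detections(objects, min_confidence=50.0):
    """Count detections per label at or above the confidence threshold."""
    return Counter(o.label for o in objects if o.confidence >= min_confidence)

# Inside a ROS node this would be wired up roughly as (topic name from above):
#   rospy.Subscriber('/zed2i/zed_node/obj_det/objects', ObjectsStamped,
#                    lambda msg: print(dict(count_detections(msg.objects))))

if __name__ == "__main__":
    # Stub messages stand in for real detections from the wrapper
    from types import SimpleNamespace as Obj
    frame = [Obj(label="Person", confidence=91.0),
             Obj(label="Person", confidence=77.0),
             Obj(label="Person", confidence=12.0)]  # below threshold, dropped
    print(dict(count_detections(frame)))
```

If this counter reports multiple people per frame while RViz draws only one box, that localises the problem to the RViz display plugin rather than the detector.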
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment otherwise it will be automatically closed in 5 days