robotika/osgar

stereo camera sync

frantisekbrabec opened this issue · 5 comments

When working with a stereo camera, we need to make sure that the two frames we use for calculations (e.g., a disparity map) are in sync, i.e., taken at the same moment. OSGAR does not appear to publish anything from the incoming image streams besides the images themselves, which makes keeping them in sync at the application level difficult. I recommend extracting and publishing additional fields such as seq and timestamps to support this.

zwn commented

We also need synchronization of the RGB image and depth for the RGBD camera in virtual SubT.

As for the synchronization as such, I was planning to do that inside subt/cloudsim2osgar.py. Even if done there, there is still the question of how to publish the fact that the two outputs are already synchronized. One way would be to create a single output containing both the RGB image and the depth. However, that is "not that nice", since the RGB image is a jpeg while the depth is a numpy array (possibly zipped), and it would mean changing all current clients, including the visualization tools.
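For illustration, a minimal pairing sketch (not actual cloudsim2osgar.py code): it buffers each incoming stream by its sim timestamp and publishes the RGB image and depth back-to-back once both halves of a frame have arrived. `publish()` is a hypothetical stand-in for the OSGAR bus call.

```python
from collections import OrderedDict

def publish(channel, data):
    """Hypothetical stand-in for the OSGAR bus publish call."""
    print(channel, type(data))

rgb_buffer = OrderedDict()    # simtime -> jpeg bytes
depth_buffer = OrderedDict()  # simtime -> depth array

def on_rgb(simtime, jpeg):
    depth = depth_buffer.pop(simtime, None)
    if depth is not None:
        # both halves of this frame are available -> publish as a pair
        publish('image', jpeg)
        publish('depth', depth)
    else:
        rgb_buffer[simtime] = jpeg

def on_depth(simtime, depth):
    jpeg = rgb_buffer.pop(simtime, None)
    if jpeg is not None:
        publish('image', jpeg)
        publish('depth', depth)
    else:
        depth_buffer[simtime] = depth
```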

The original idea was that the osgar timestamp attached to each message would be used for this. However, that is not possible for simulation, the way it is implemented now, since that timestamp is walltime and not simtime. I would vote for simtime, but that ship sailed a long time ago. So what is left is adding a simtime timestamp to the image and depth topics and doing the synchronization on the application side. I am wary of seq numbers, as they are somewhat ROS-specific, while "the time when the image was actually taken" has meaning even outside a ROS or simulation context.

So my proposal is to add a field to the image and depth topics containing "the time when the image was actually taken". Comments welcome.
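To make the proposal concrete, here is a hedged sketch of the application side, assuming each message is extended to a (simtime, data) pair as proposed above. The class and channel names are illustrative, not an existing OSGAR interface.

```python
class StereoSync:
    """Match 'image' and 'depth' messages that carry the same simtime."""

    def __init__(self):
        self.pending = {}  # simtime -> {'image': ..., 'depth': ...}

    def on_message(self, channel, simtime, data):
        entry = self.pending.setdefault(simtime, {})
        entry[channel] = data
        if 'image' in entry and 'depth' in entry:
            # both messages for this simtime have arrived
            del self.pending[simtime]
            return entry['image'], entry['depth']
        return None  # still waiting for the other half
```

In practice the `pending` dictionary would also need pruning of stale entries in case one half of a pair never arrives.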

zwn commented

Actually, another option would be to always publish one of the topics first and the other second. On the application side, one would wait for the second topic before triggering any calculation. Topics in osgar always arrive in the published order, so this would be sufficient to keep the streams synchronized. Since that is a much less involved change, I think that is what I'll start with in cloudsim2osgar.py.
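A sketch of that convention, assuming the driver always publishes 'image' first and 'depth' second; `process()` is a hypothetical application callback.

```python
last_image = None

def process(jpeg, depth):
    pass  # e.g. compute the disparity map / obstacle detection here

def on_image(jpeg):
    # first message of the pair - just remember it
    global last_image
    last_image = jpeg

def on_depth(depth):
    # second message of the pair -> last_image belongs to this frame
    if last_image is not None:
        process(last_image, depth)
```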

zwn commented

Another breadcrumb: depth2scan does nontrivial computation (time-wise), and by the time it publishes the scan, it might be 0.3s late. It would make sense in this case to attach a timestamp to the scan reflecting the time when the scan was relevant. The receiver could then do the appropriate transformation to place the scan at the position where it was actually taken.
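A sketch of what the receiver could do, assuming the scan arrives together with the simtime at which it was taken and the receiver keeps a short pose history. The names and the nearest-pose lookup are illustrative; interpolation between poses is left out for brevity.

```python
import bisect

pose_times = []  # sorted simtime values (poses assumed to arrive in order)
poses = []       # matching (x, y, heading) tuples

def on_pose(simtime, pose):
    pose_times.append(simtime)
    poses.append(pose)

def on_scan(scan_simtime, scan):
    # look up the pose closest to the moment the scan was taken,
    # not the (up to ~0.3s later) moment it was received
    if not poses:
        return None
    i = bisect.bisect_left(pose_times, scan_simtime)
    i = min(i, len(poses) - 1)
    return poses[i], scan  # place the scan at its original pose
```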

FYI - it seems that the timestamps, even for a single stereo camera, aren't necessarily guaranteed to be identical, which perhaps means the images aren't guaranteed to be taken at the very same moment. So perhaps capturing an image from one camera and then waiting for the image from the other camera is the best one can do when building stereo vision. Looking for identical seq values or timestamps doesn't seem to work.

zwn commented

Hmm. For the general case, I can see how that can happen. But for the specific case of a simulated RGBD camera, I would expect the color and depth images to have the same timestamp.

On the ROS side there seems to be some kind of stereo_node that restamps the individual camera images with a common timestamp: https://answers.ros.org/question/32912/stereo-camera-syncronization/?answer=32991#post-id-32991
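For reference, the usual ROS 1 tool for this is message_filters.ApproximateTimeSynchronizer, which pairs messages whose header stamps are within a given slop. The topic names below are illustrative.

```python
import rospy
import message_filters
from sensor_msgs.msg import Image

def stereo_callback(left, right):
    # both images are close in time (within `slop` seconds)
    pass

rospy.init_node('stereo_sync_example')
left_sub = message_filters.Subscriber('/stereo/left/image_raw', Image)
right_sub = message_filters.Subscriber('/stereo/right/image_raw', Image)
sync = message_filters.ApproximateTimeSynchronizer(
    [left_sub, right_sub], queue_size=10, slop=0.05)
sync.registerCallback(stereo_callback)
rospy.spin()
```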