NVlabs/Deep_Object_Pose

Question regarding Ground truth data generation for testing

Opened this issue · 1 comments

Hello @TontonTremblay ,

Hope you are doing well; it has been a while since our last discussion. I have a question about ground-truth data generation for testing. I am currently trying to generate ground-truth object pose data using a mocap system with camera synchronization. I have a dataset of synchronized RGB images from a ZED2 and the 6-DoF pose of the object relative to the camera (computed from the mocap-world-to-camera and mocap-world-to-object transforms). Since I know the object dimensions, I can generate the 3D coordinates of the object's keypoints (8 cuboid vertices + 1 centroid). However, when I project them onto the 2D image plane, the resulting 2D keypoints (8 vertex pixel coordinates + 1 centroid pixel coordinate) do not line up with the object in the image. Is there any open-source implementation you would suggest for getting this projection right?

Could you share a few examples? It is hard to say what is wrong, but I would guess a calibration problem or the intrinsics.
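Besides calibration, a common failure mode in mocap-to-camera setups is an error in the transform chain or a coordinate-convention mismatch. A sketch of the composition, with made-up example transforms (the helper `make_T` and all values are illustrative, not from this thread):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical mocap measurements (replace with real data):
world_T_camera = make_T(np.eye(3), np.array([0.0, 0.0, 0.5]))
world_T_object = make_T(np.eye(3), np.array([0.0, 0.0, 1.5]))

# The object pose in the camera frame is the composition:
camera_T_object = np.linalg.inv(world_T_camera) @ world_T_object
# Here the object ends up 1 m in front of the camera along +z:
print(camera_T_object[:3, 3])  # → [0. 0. 1.]
```

Note that OpenCV's camera frame is x-right, y-down, z-forward; if the mocap rigid body attached to the camera uses a different convention, a fixed corrective rotation between the mocap body frame and the optical frame is needed before projecting.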