j96w/DenseFusion

Issue with comparing gt to predicted gt


Thank you again for sharing your code!
I am training your network with synthetically generated data. My setup is the following: I have a fixed camera looking at a scene in which I generate objects with random poses. I then write the data in the format of the LineMOD dataset, but instead of the camera rotation and translation as "cam_R_m2c" and "cam_t_m2c" I write down the translation and rotation of the respective object.

The network is now training and I am getting an average distance of around 0.0105252 (which I think is okay). But when I print out the predicted rotation and translation, I get totally wrong values compared to the ground truth of my objects (see screenshot).
I think this is because I give the network the rotation and translation of the objects and not of the camera. Do I need to change my data generation, or is there a simple calculation step at the end so that I get the right prediction of the rotation and translation? I hope that's not a stupid question ^^
Thanks in advance!
[screenshot: printout_gt]

j96w commented

Hi, it would be better if you could spend some time finding a more thorough answer in other resources. Basically, you need to make sure that the GT pose corresponds to the camera frame.
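
In other words, the pose written to the gt file has to be the model-to-camera transform, not the object's pose in the world frame. Below is a minimal sketch of that conversion for the fixed-camera setup described above, assuming the camera's world pose (`R_wc`, `t_wc`) and each object's world pose (`R_wo`, `t_wo`) are known; the function and variable names are illustrative and not part of the DenseFusion code.

```python
import numpy as np

def world_pose_to_cam_frame(R_wc, t_wc, R_wo, t_wo):
    """Convert an object's world-frame pose into the camera frame,
    i.e. the model-to-camera transform that LineMOD stores as
    cam_R_m2c / cam_t_m2c.

    R_wc (3x3), t_wc (3,): pose of the camera in the world frame
    R_wo (3x3), t_wo (3,): pose of the object in the world frame
    """
    # Invert the camera pose to get the world -> camera transform
    R_cw = R_wc.T
    t_cw = -R_cw @ t_wc
    # Chain model -> world -> camera
    R_m2c = R_cw @ R_wo
    t_m2c = R_cw @ t_wo + t_cw
    return R_m2c, t_m2c
```

Note that the LineMOD gt.yml files store cam_R_m2c as nine row-major floats and cam_t_m2c in millimetres, so the result typically still needs to be flattened and scaled before it matches what the data loader expects.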

Hey @marmas92 ,
I hope you still read this. I have roughly the same issue as you had. My predicted translation is close to the GT translation, but the predicted rotation is totally off. Were you able to fix that problem, and could you maybe help me?
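
One hedged suggestion for narrowing that down (purely a sketch, not DenseFusion code): convert the predicted quaternion to a rotation matrix and compare it to the GT rotation via the geodesic angle instead of element-wise. If the angle is consistently large, the GT rotation is probably still expressed in the world frame, or the quaternion ordering (w, x, y, z vs. x, y, z, w) does not match.

```python
import numpy as np

def quat_to_matrix(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance between two rotation matrices in degrees."""
    cos_angle = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```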

Hey @marmas92 ,
can you please help me? I have been stuck on this problem for a long time now. I would be really thankful.
Best regards, Ixion