philgras/video-head-tracker

Hi, access to the transformation matrix

Closed this issue · 3 comments

Hi, many thanks for sharing this tool!
I saw in your paper that you evaluate the NerFACE code on your own dataset. Could you give more details on how you obtain the transformation matrix and intrinsic camera matrix in their JSON file with this tracker? I tried to use this tracker on their dataset, but the network does not converge. This tracker seems to use a different coordinate system than Face2Face.
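A frequent cause of non-convergence in this situation is a camera-convention mismatch: trackers built on OpenCV-style conventions use x-right, y-down, z-forward, while NeRF-style pipelines typically expect x-right, y-up, z-back camera-to-world matrices. The sketch below shows the usual conversion; whether this exact flip matches this tracker's and NerFACE's conventions is an assumption, and the function names are my own.

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-8:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def to_nerf_pose(rvec, tvec):
    """Build a 4x4 world-to-camera matrix from (rvec, tvec), invert it to
    camera-to-world, and flip the y/z axes (OpenCV -> OpenGL/NeRF style).

    ASSUMPTION: the tracker outputs an OpenCV-convention world-to-camera
    pose; adjust the sign flips if its convention differs.
    """
    w2c = np.eye(4)
    w2c[:3, :3] = rodrigues(np.asarray(rvec, dtype=float))
    w2c[:3, 3] = np.asarray(tvec, dtype=float)
    c2w = np.linalg.inv(w2c)
    # Negate the y and z basis vectors of the camera frame.
    c2w[:3, 1:3] *= -1.0
    return c2w
```

If the poses look mirrored or the scene renders behind the camera after conversion, the remaining sign flips (y only, or z only) are the usual suspects to try.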

Hi! We reached out to the author of NerFACE, sent him the inputs, and he provided the comparison.

Hi, thanks for your response. What do the “inputs” mean here? Are they the tracked rotation matrices (from the batch_rodrigues function) and [f, cx, cy]?

Sorry for not being clear here. Inputs == only the video. The Face2Face tracking algorithm was used in this comparison.
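For reference, the [f, cx, cy] triple mentioned above maps onto a standard pinhole intrinsic matrix in the usual way, assuming square pixels and zero skew (the helper names below are illustrative, not from either repo):

```python
import numpy as np

def intrinsic_matrix(f, cx, cy):
    """Pinhole intrinsics from one focal length and a principal point.

    ASSUMPTION: square pixels (fx == fy == f) and no skew, which is the
    common setup for single-camera face-tracking pipelines.
    """
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])

def project(K, point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    return uvw[:2] / uvw[2]
```

A point on the optical axis, e.g. `project(intrinsic_matrix(1000.0, 256.0, 256.0), [0.0, 0.0, 1.0])`, lands on the principal point `(256, 256)`, which is a quick sanity check that f, cx, and cy were read in the right order.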