Visualizing the DenseFusion pred/target point sets on the 2D image [unable to reproduce the paper's true dis error]
Hi,
In your LineMOD testing example, you build a predicted point set, `pred`, from the estimated rotation `my_r` and translation `my_t`. This prediction is compared against the reference `target`, a similarly constructed set of points from the object model.
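For reference, here is a minimal sketch of how I understand `pred` to be constructed (assuming `my_r` is the network's quaternion output in (w, x, y, z) order, matching `transformations.quaternion_matrix`, and `model_points` is an (N, 3) array):

```python
from scipy.spatial.transform import Rotation

# Sketch of how I read the construction of `pred` in eval_linemod.py.
# ASSUMPTION: my_r is the quaternion output in (w, x, y, z) order and
# my_t is the 3-vector translation.
def build_pred(model_points, my_r, my_t):
    # SciPy expects scalar-last (x, y, z, w), so reorder the quaternion.
    R = Rotation.from_quat([my_r[1], my_r[2], my_r[3], my_r[0]]).as_matrix()
    return model_points @ R.T + my_t  # (N, 3) points in the camera frame
```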
I am attempting to reproduce the figures in your paper, where these points are projected onto the 2D RGB images. Using your released model and the downloaded trained checkpoints, the projected predictions do not visually line up with the objects in the RGB frames.
I am using OpenCV's `projectPoints` method, in the form `cv2.projectPoints(model_points, my_r, my_t, cam_mat)`, where the camera matrix is built from the camera intrinsics provided in dataset.py.
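Concretely, my projection step looks roughly like the sketch below. One thing I am unsure about: `cv2.projectPoints` expects a Rodrigues rotation vector rather than a quaternion, so I convert the rotation first (the intrinsic values are the ones I read out of dataset.py; please correct me if they are wrong):

```python
import cv2
import numpy as np

# Sketch of my projection step. ASSUMPTIONS: R is the 3x3 rotation
# matrix recovered from my_r (see the sketch above), my_t is the
# 3-vector translation, and the intrinsics are the LineMOD values
# from dataset.py.
def project_to_image(model_points, R, my_t):
    cam_mat = np.array([[572.41140, 0.0, 325.26110],
                        [0.0, 573.57043, 242.04899],
                        [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                   # assuming an undistorted image
    rvec, _ = cv2.Rodrigues(R)           # projectPoints wants a Rodrigues vector
    tvec = np.asarray(my_t, dtype=np.float64).reshape(3, 1)
    img_pts, _ = cv2.projectPoints(model_points, rvec, tvec, cam_mat, dist)
    return img_pts.reshape(-1, 2)        # (N, 2) pixel coordinates
```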
As you can see, while the error between the predicted and target point sets is low, the true error against the object in the scene is high. Is there a more accurate way of reproducing your paper's visualization, or is the prediction produced by the example incorrect? I could not find any reference or material explaining the method used to visualize results in either the DenseFusion paper or the code repository.
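For completeness, this is the error I am referring to when I say the predicted/target error is low (a sketch; I am assuming this matches the dis metric in eval_linemod.py):

```python
import numpy as np

# Mean point-to-point distance between the predicted and target point
# sets; my assumption is that this is what eval_linemod.py reports as dis.
def mean_point_distance(pred, target):
    return np.mean(np.linalg.norm(pred - target, axis=1))
```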
Thank you.