j96w/DenseFusion

Help with running object tracking on new videos

balevin opened this issue · 1 comment

I am trying to use this model for a research project on object tracking in videos I have taken (of objects from the YCB dataset). However, I am struggling to use it, because the inputs to the original estimator (points, choose) seem to rely on camera metadata and other ground-truth data used for training/testing, whereas I simply want to run it on real-life footage. I was hoping for some help understanding how to run object tracking on my own video (of YCB objects, so no new training should be needed; the video is already separated into depth and RGB frames). Any help would be greatly appreciated.
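In case it helps, here is a rough sketch of how `points` and `choose` can be built for your own frames. This is only an assumption based on how the estimator's inputs are described above, not the repo's actual dataloader: `choose` is the set of flat pixel indices of valid depth pixels inside the object mask, and `points` is the pinhole back-projection of those pixels into a camera-frame point cloud. The intrinsics `fx, fy, cx, cy`, the `depth_scale`, and the fixed `num_points` are all placeholders you would replace with your own camera's calibration and the values the network was trained with:

```python
import numpy as np

def depth_to_points(depth, mask, fx, fy, cx, cy,
                    depth_scale=1.0, num_points=500):
    """Back-project masked depth pixels into a camera-frame point cloud.

    depth: (H, W) depth image; mask: (H, W) boolean object mask.
    fx, fy, cx, cy: YOUR camera's intrinsics (assumed inputs here --
    not the YCB-Video values baked into the repo's dataset code).
    depth_scale: divisor converting raw depth units to meters (assumed).
    Returns (points, choose), the shapes the estimator appears to expect.
    """
    valid = np.logical_and(mask, depth > 0)
    choose = np.flatnonzero(valid.ravel())
    if choose.size == 0:
        raise ValueError("no valid depth pixels inside the mask")

    # Fix the sample size, since the network takes a fixed number of points:
    # subsample when there are too many, wrap-pad when there are too few.
    if choose.size >= num_points:
        idx = np.random.choice(choose.size, num_points, replace=False)
        choose = np.sort(choose[idx])
    else:
        choose = np.pad(choose, (0, num_points - choose.size), mode="wrap")

    h, w = depth.shape
    vs, us = np.divmod(choose, w)            # pixel row/col of each index
    z = depth.ravel()[choose] / depth_scale  # depth in meters
    x = (us - cx) * z / fx                   # pinhole back-projection
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=1).astype(np.float32)
    return points, choose
```

You would still need a segmentation mask per frame (e.g. from the repo's segmentation network or your own), and the result should be shaped to match whatever tensor layout the estimator consumes.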

Hey @balevin,
I'm facing exactly the same problem right now. Any updates you could share? :)
Thanks !