facebookresearch/co-tracker

Regarding the Application of the generic features

sfchen94 opened this issue · 3 comments

Hello,

Thank you for your work.

I would like to ask if there are any plans to release a DINOv2 version of the model.

Additionally, have you tried using a larger DINOv2 backbone, such as ViT-B or ViT-L,
to see if it could help improve performance?

@nikitakaraevv
Thanks for the update.
It seems training with the DINOv2 features was not addressed.

Regarding Section 3.4, 'Unrolled Window Training': does this indicate that the batching problem has been resolved,
allowing multiple batches on a single GPU? `compute_sparse_tracks` still forces the batch size to 1.

Hi @sfchen94,
We trained CoTracker with the smallest DINOv2 model, but it was not helping at all. I think DINO can help mostly with semantic correspondences by roughly identifying the corresponding point in another frame. However, it seems that semantic correspondences are not really needed if we have a continuous video, and not just a pair of images of the same object. We do not plan to release the model trained with DINOv2 features for now, but we will keep working on motion estimation, and will try to explore other approaches.

Yes, the batching problem is indeed solved! Thank you for pointing this out, I just removed the `assert B == 1` from the predictor. You can now train and run the model with different batch sizes.
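For anyone else reading this, the batched calling convention can be sketched with a minimal stand-in predictor. Note this is a hypothetical toy, not the real `CoTrackerPredictor`; the function name, grid-based query count, and return shapes are assumptions made only to illustrate the `(B, T, C, H, W)` video layout that a predictor without the `B == 1` restriction can accept:

```python
import numpy as np

def dummy_predictor(video: np.ndarray, grid_size: int = 10):
    """Hypothetical stand-in for a point-tracking predictor.

    video: float array of shape (B, T, C, H, W).
    Returns (tracks, visibility):
      tracks     -> (B, T, N, 2)  xy coordinates per query point
      visibility -> (B, T, N)     per-frame visibility flags
    """
    assert video.ndim == 5, "expected (B, T, C, H, W)"
    B, T, C, H, W = video.shape
    N = grid_size * grid_size  # one query point per grid cell
    tracks = np.zeros((B, T, N, 2), dtype=np.float32)
    visibility = np.ones((B, T, N), dtype=bool)
    return tracks, visibility

# With no `assert B == 1` in the predictor, batch sizes > 1 are fine:
video = np.random.rand(4, 8, 3, 64, 64).astype(np.float32)  # B=4, T=8
tracks, vis = dummy_predictor(video, grid_size=5)
```

The point of the sketch is only the shape contract: the batch dimension is carried through unchanged, so the same call works for `B = 1` and `B = 4`.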

Cool.
I'm deeply grateful for your update.