cvlab-yonsei/LbA

Question about the training speed


Thanks for your work.

However, when I tried to reproduce your results with an Nvidia 2080Ti (as recommended in the paper), the training speed seemed very slow: each epoch on SYSU-MM01 took nearly 20 minutes, which does not match the reported 8-hour training time.

I am already using CUDA for acceleration, so I wonder why this happens. Thank you.

Hello. Thank you for your interest in our work.

The data loaders require computationally heavy preprocessing on the CPU, so the training time can vary greatly depending on your CPU model. FYI, we used Intel i7 processors.

Also, when releasing our code, we valued interpretability over efficiency, so the training code we used for the actual experiments differs slightly from this repository. For example:

  • implementing with fewer reshaping operations
  • using torch-based operations for computing feature similarity and matching probability (e.g. torch.cdist and torch.softmax); see the sketch after this list
  • omitting unnecessary output computations within embed_net
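
For illustration, a minimal sketch (not the code we used) of what such torch-based similarity and matching-probability computations could look like; the tensor names and shapes here are hypothetical:

```python
import torch

# Hypothetical features: N query vectors and M gallery vectors of dimension D.
query = torch.randn(8, 256)     # (N, D)
gallery = torch.randn(12, 256)  # (M, D)

# Pairwise Euclidean distances in a single call, without manual broadcasting/reshaping.
dist = torch.cdist(query, gallery, p=2)   # (N, M)

# Turn distances into matching probabilities: smaller distance -> higher probability.
match_prob = torch.softmax(-dist, dim=1)  # each row sums to 1
```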

We don't plan to release the faster implementation, but you are more than welcome to try it yourself. Let us know if you run into any trouble by replying here or sending me an e-mail!