zkcys001/UDAStrongBaseline

The result of GLT is much lower than the paper's

Closed this issue · 2 comments

Hi, I follow the readme to run the GLT code.
The resulting mAP is 57.3% on the duke->market task (10 iterations), which is much lower than the paper's 79.5%.
What can I do to improve the result to match the paper's?
Thanks for your attention.

Thanks for your attention. The result you showed comes from running K-means clustering a single time. You can run K-means clustering multiple times to improve performance. I re-ran the GLT code over the last several weeks, but it still seems to have some latent bugs, and I will push an update soon.
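One reading of "run K-means clustering multiple times" is to restart the clustering several times and keep the best labelling instead of a single pass. This is only a hedged sketch of that idea, not the repo's actual pipeline; `best_of_n_kmeans` and all parameters below are hypothetical names for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def best_of_n_kmeans(features, n_clusters, n_runs=10, seed=0):
    """Run K-means `n_runs` times with different seeds and keep the
    labelling with the lowest inertia (within-cluster sum of squares)."""
    best_labels, best_inertia = None, np.inf
    for run in range(n_runs):
        km = KMeans(n_clusters=n_clusters, n_init=1, random_state=seed + run)
        labels = km.fit_predict(features)
        if km.inertia_ < best_inertia:
            best_labels, best_inertia = labels, km.inertia_
    return best_labels

# toy demo on random features standing in for re-ID embeddings
feats = np.random.RandomState(0).randn(100, 16)
labels = best_of_n_kmeans(feats, n_clusters=4)
```

In the UDA setting, a similar restart (or re-clustering at each training iteration) makes the pseudo-labels less sensitive to one bad K-means initialisation.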

In fact, DBSCAN can achieve higher performance. I suggest you try the stronger baseline or the uncertainty model, which are based on DBSCAN; it also takes less time than K-means.
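For reference, DBSCAN-based pseudo-labelling on normalised features can be sketched as below. The `eps` and `min_samples` values are illustrative assumptions, not the settings used in this repository.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# random features standing in for re-ID embeddings
feats = np.random.RandomState(1).randn(200, 16).astype(np.float32)
# L2-normalise so a Euclidean eps behaves like a cosine-distance threshold
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

labels = DBSCAN(eps=0.6, min_samples=4).fit_predict(feats)
# DBSCAN labels noise samples as -1; they are typically dropped
# from pseudo-label training rather than assigned to a cluster
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```

Unlike K-means, DBSCAN does not need the number of identities in advance, which is one reason it is popular for unsupervised re-ID pseudo-labelling.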

In addition, you must use 4 GPUs to train the model.

Thank you for your reply. I will try it later.
Very nice job!
Looking forward to the upcoming updates~