Question about different results when trying to reproduce
FabianGebben opened this issue · 4 comments
Thank you for sharing your work, your paper was very interesting and the results are also very impressive!
I had a question regarding the evaluation on MSLS-val. I attempted to reproduce your results by following the repository, downloading the data, and training the model as described in the README. Initially, I trained the model solely on the MSLS dataset. I attempted to evaluate the results on MSLS by executing the following command for both my trained model and the provided trained model:
python3 eval.py --datasets_folder=/path/to/your/datasets_vg/datasets --dataset_name=msls --resume=/path/to/finetuned/msls/model/SelaVPR_msls.pth --rerank_num=100
However, these were the results that I obtained:
| Model | R@1 | R@5 | R@10 |
|---|---|---|---|
| Claimed performance in README | 90.8 | 96.4 | 97.2 |
| Self-trained model | 87.0 | 94.0 | 95.6 |
| Downloaded model | 86.6 | 93.8 | 95.6 |
Further fine-tuning the model on Pitts30k and evaluating it gave the same results as you had in your README for evaluation on Pitts30k. Therefore, I'm wondering if you could help me understand why there's a difference for the MSLS-val. Am I evaluating with the wrong data, or is there something else I might be missing?
Hello, thanks for your interest in our work. I guess you used this repository (https://github.com/gmberton/VPR-datasets-downloader) to format the MSLS dataset and used all query images (about 11k) in MSLS-val for testing. However, the official version of MSLS-val (https://github.com/mapillary/mapillary_sls) contains only 740 query images (i.e., a subset). The vast majority of VPR works use the official MSLS-val for testing. You can get these 740 query images through the official repository, or get the keys (names) of these images here.
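For readers hitting the same mismatch, the filtering step can be sketched as below. This is a hypothetical helper, not code from the repository: it assumes you have a text file listing the 740 official query keys (one per line) and that each formatted query filename contains its key as a substring.

```python
# Hypothetical sketch: keep only the official MSLS-val queries (740 images)
# out of all formatted query images (~11k). Assumes `keys_file` lists one
# official image key per line, and that each formatted filename contains
# its key as a substring.
from pathlib import Path


def filter_official_queries(queries_dir: str, keys_file: str) -> list:
    with open(keys_file) as f:
        keys = {line.strip() for line in f if line.strip()}
    kept = []
    for img in sorted(Path(queries_dir).glob("*.jpg")):
        # Match by substring: a formatted name should embed the official key.
        if any(k in img.name for k in keys):
            kept.append(img)
    return kept
```

Evaluating only on the returned subset should reproduce the README numbers, assuming the key list matches the official mapillary_sls val split.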
Yes, I indeed used that repository to format the dataset and used all the validation query images for testing. I did not know about the different subset of MSLS that is typically used for MSLS-val. This will most likely resolve the differences that I encountered. I'll use the official MSLS-val subset for testing as advised. Thank you so much for the help!
The keys (names) of the official MSLS-val images don't follow the form "path/to/file/@utm_easting@utm_northing@...@.jpg" used in the code. Could you kindly provide the names in the form "path/to/file/@utm_easting@utm_northing@...@.jpg"?
@HUSTNO1WXY Hello, you can directly run the code in VPR-datasets-downloader to format the image names.
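To illustrate the naming convention being discussed, here is a minimal sketch of building and parsing a filename of the "@utm_easting@utm_northing@...@.jpg" shape. The field layout is an assumption for illustration; the actual formatting is performed by VPR-datasets-downloader, and the `key` field here is hypothetical.

```python
# Hypothetical sketch of the "@utm_easting@utm_northing@...@.jpg" naming
# convention. Only the first two @-separated fields (easting, northing)
# are assumed meaningful to the evaluation code; the trailing field
# carries the image key for illustration.
def build_image_name(utm_east: float, utm_north: float, key: str) -> str:
    return f"@{utm_east:.2f}@{utm_north:.2f}@{key}@.jpg"


def parse_utm(name: str) -> tuple:
    # Inverse operation: recover (easting, northing) from a formatted name.
    fields = name.split("@")
    return float(fields[1]), float(fields[2])
```

Running the downloader's own formatting code remains the reliable way to get names the evaluation script can parse; this sketch only shows the shape of the convention.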