About implementation details
Closed this issue · 2 comments
Hi. Thank you for your great work.
I was wondering why LUT_SIZE is set to 6015 when the target domain is CUHK-SYSU.
Also, CUHK-SYSU (11,206) has more scene images than PRW (5,704). If the batch size is set to 4 (2 for CUHK-SYSU and 2 for PRW), does that mean that in each epoch some CUHK-SYSU images are not fed into the model for training?
Hi @YangJae96, thank you for your interest. I'm sorry that the code has not been fully cleaned up yet due to some recent deadlines; it was open-sourced to meet the ECCV requirements, which may cause some confusion.
The LUT_SIZE in the config file is not used in our model: as can be seen in lines 69 and 95 of train_da_dy_cluster.py, the memory is not initialized with the LUT_SIZE parameter, so you can delete it from the config files.
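As a rough sketch of why no fixed LUT_SIZE is needed (hypothetical code, not the repo's actual implementation): a dynamic-cluster memory can be sized from the clustering output at runtime, so its capacity never has to be written into the config.

```python
import numpy as np

def init_memory(features: np.ndarray, cluster_labels: np.ndarray) -> np.ndarray:
    """Build a cluster-centroid memory sized by the number of clusters
    found at runtime (no fixed LUT_SIZE constant). Hypothetical sketch."""
    labels = np.unique(cluster_labels[cluster_labels >= 0])  # ignore noise (-1)
    memory = np.stack([features[cluster_labels == c].mean(axis=0) for c in labels])
    # L2-normalize each centroid, as is common for feature memories
    memory /= np.linalg.norm(memory, axis=1, keepdims=True)
    return memory

feats = np.random.rand(10, 4).astype(np.float32)
labels = np.array([0, 0, 1, 1, 1, 2, 2, -1, 0, 2])
mem = init_memory(feats, labels)
print(mem.shape)  # memory size comes from clustering, not a config constant
```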
In line 87 of engine.py, you can see that some CUHK-SYSU images are indeed not fed to the model in each epoch. An alternative implementation is to set a fixed number of iterations per epoch (like the implementation in SpCL). I tried this strategy in an earlier stage of my experiments and observed very close performance.
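To illustrate the difference between the two strategies (a hypothetical sketch, not the repo's code): zipping two loaders ends the epoch when the shorter dataset is exhausted, while a fixed-iteration epoch cycles both loaders independently so every batch of the larger dataset is eventually seen.

```python
from itertools import cycle

def fixed_iter_epoch(loader_a, loader_b, iters_per_epoch):
    """Yield paired batches for a fixed number of iterations,
    cycling each loader independently (SpCL-style sketch)."""
    it_a, it_b = cycle(loader_a), cycle(loader_b)
    for _ in range(iters_per_epoch):
        yield next(it_a), next(it_b)

# With zip(), the epoch ends when the shorter dataset (PRW) runs out,
# so the remaining CUHK-SYSU batches are skipped that epoch:
cuhk = list(range(11206 // 2))  # 5603 batches of 2 images
prw = list(range(5704 // 2))    # 2852 batches of 2 images
zipped = list(zip(cuhk, prw))
print(len(zipped))  # 2852 iterations; CUHK batches 2852..5602 unused

# Fixed-iteration alternative covers all CUHK batches in one "epoch":
fixed = list(fixed_iter_epoch(cuhk, prw, iters_per_epoch=5603))
print(len(fixed))   # 5603 iterations
```

Note that `itertools.cycle` caches its input, which is fine for this toy list but not for a real shuffling DataLoader; there you would re-create the iterator on exhaustion instead.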
Oh, I see. Thank you for the detailed explanation.