Sam1224/SCCAN

Training details

King-king424 opened this issue · 1 comments

Thanks for sharing your work.
I have a small question.
Your paper indicates that the batch size is set to 8, but when using distributed training in the 5-shot setting (see the screenshot below), doesn't the real batch size become 8*4=32?
[screenshot: distributed-training settings for the 5-shot experiment]

Thanks for your interest in our work!

Sorry, I made a mistake in the meta-training part of the README. The correct version is: we use 1 GPU for all experiments on PASCAL-5i (both 1-shot and 5-shot), and 4 GPUs for COCO-20i.
The batch size is 8 when using 1 GPU, and it is decreased to 2 per GPU when using 4 GPUs, so that each batch still has 8 samples in total and the learning rate does not need to be changed.
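The relationship above can be sketched as a small helper (a minimal illustration, not code from this repo — the function name `per_gpu_batch_size` is made up here): in data-parallel training, the effective (global) batch size is the per-GPU batch size times the number of GPUs, so the per-GPU value is scaled down to keep the global batch fixed at 8.

```python
def per_gpu_batch_size(global_batch_size: int, num_gpus: int) -> int:
    """Per-GPU batch size that keeps the global batch size fixed.

    In data-parallel training (e.g. PyTorch DistributedDataParallel),
    each of the `num_gpus` processes draws its own mini-batch, so the
    effective batch size is per_gpu * num_gpus.
    """
    assert global_batch_size % num_gpus == 0, "global batch must split evenly"
    return global_batch_size // num_gpus

# 1 GPU (PASCAL-5i): batch_size in the .yaml is 8 -> global batch 8
print(per_gpu_batch_size(8, 1))  # 8
# 4 GPUs (COCO-20i): batch_size in the .yaml is 2 -> global batch 2 * 4 = 8
print(per_gpu_batch_size(8, 4))  # 2
```

This is why setting `batch_size: 8` with 4 GPUs would indeed give a real batch size of 32, as the question points out.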
I also checked all .yaml files again and found that the batch size in pascal_split0_resnet50.yaml was set to 4 (I used that value for the experiments in Appendix A.3 and forgot to change it back). It should be 8 for training, and I have updated it.
The configurations in all .yaml files should be correct now.

Thanks for pointing this out!