Is the batch size 1 per GPU?
Closed this issue · 4 comments
seominseok0429 commented
Your research has inspired me very much, so I'm trying to reproduce this experiment. Is the batch size 1 per GPU?
feipanir commented
Yes, we use batch_size=1 during training. Larger batch sizes are possible if the image size is smaller.
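To illustrate what batch_size=1 per GPU means in practice (a minimal plain-Python sketch, not the authors' actual data loader; the sample names are placeholders): each training step feeds a single image to each GPU, which keeps per-GPU memory low enough for full-resolution inputs.

```python
def iter_batches(samples, batch_size=1):
    """Yield consecutive batches of `samples`. With batch_size=1,
    every step processes exactly one image per GPU."""
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]

# Placeholder image IDs standing in for full-resolution training images.
images = ["img_0", "img_1", "img_2", "img_3"]

# batch_size=1 -> four batches, one image each.
batches = list(iter_batches(images, batch_size=1))
```

With smaller images, raising `batch_size` simply groups more samples per step, as the comment above suggests.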
seominseok0429 commented
If so, did you keep all hyperparameters the same as in the ADVENT config?
feipanir commented
You are right, we use the same configuration as in AdvEnt.
seominseok0429 commented
Thank you for your kind answer, and thank you for your great research.