feipanir/IntraDA

Is the batch size 1 per GPU?

Closed this issue · 4 comments

Your research has inspired me very much, so I'm trying to reproduce this experiment. Is the batch size 1 per GPU?

Yes, we use batch_size=1 during training. A larger batch size is possible if the image size is smaller.
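(For anyone reproducing this, here is a minimal sketch of what a per-GPU batch size of 1 looks like with a plain PyTorch `DataLoader`. The dataset, image resolution, and worker settings below are illustrative placeholders, not the repository's actual training code.)

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for the segmentation dataset; full-resolution
# crops are memory-heavy, which is why batch_size=1 per GPU is used.
images = torch.randn(8, 3, 512, 1024)
labels = torch.randint(0, 19, (8, 512, 1024))
dataset = TensorDataset(images, labels)

loader = DataLoader(
    dataset,
    batch_size=1,     # one image per GPU, as confirmed above
    shuffle=True,
    num_workers=2,
    pin_memory=True,
)

for batch_images, batch_labels in loader:
    # forward/backward pass of the segmentation network would go here
    pass
```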

If so, were all hyperparameters kept the same as the configuration in ADVENT?

You are right, we use the same configuration as in AdvEnt.
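(As a rough reference, the sketch below lists the kind of hyperparameters such a configuration would carry over. The key names and the specific values are assumptions on my part, based on what the AdvEnt paper reports; the authoritative numbers are in the ADVENT repository's config files.)

```python
# Hypothetical summary of the training configuration inherited from AdvEnt.
# Values are placeholders for illustration only; check the ADVENT configs
# for the exact settings used in the experiments.
TRAIN_CONFIG = {
    "batch_size": 1,            # one image per GPU, as stated above
    "learning_rate": 2.5e-4,    # SGD for the segmentation network (assumed)
    "momentum": 0.9,            # (assumed)
    "weight_decay": 5e-4,       # (assumed)
    "learning_rate_d": 1e-4,    # Adam for the discriminator (assumed)
    "lambda_adv": 1e-3,         # adversarial loss weight (assumed)
}
```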

Thank you for your kind answer, and thank you for your great research. It has inspired me very much.