AlibabaResearch/AdvancedLiterateMachinery

training schedule of LayoutLLM

jpWang opened this issue · 1 comment

Hi,
Thanks for the great work on LayoutLLM, and congratulations!
I'd like to ask about the training schedule. The appendix of the paper says "the batch size is 32 for each GPU" for pre-training and "the batch size is 8 for each GPU" for fine-tuning:
[screenshot of the appendix]
Which GPUs are you using, and how many? For example, 8x A100-80G? That would make the total pre-training batch size 32x8 = 256 and the total fine-tuning batch size 8x8 = 64.
Looking forward to your reply, and thanks again. @luochuwei
Yes, you're right.
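
For anyone reading later, a minimal sketch of the batch-size arithmetic confirmed above. The GPU count of 8 is an assumption carried over from the question; only the per-GPU batch sizes come from the paper's appendix.

```python
# Effective (global) batch size = per-GPU batch size x number of GPUs.
num_gpus = 8              # assumed, e.g. 8x A100-80G (from the question, not the paper)
pretrain_bs_per_gpu = 32  # per-GPU batch size for pre-training (paper appendix)
finetune_bs_per_gpu = 8   # per-GPU batch size for fine-tuning (paper appendix)

pretrain_global_bs = pretrain_bs_per_gpu * num_gpus  # 32 * 8 = 256
finetune_global_bs = finetune_bs_per_gpu * num_gpus  # 8 * 8 = 64

print(f"pre-training global batch size: {pretrain_global_bs}")
print(f"fine-tuning global batch size:  {finetune_global_bs}")
```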