[QUESTION] Why does bfloat16 require gradient accumulation and allreduces to be done in fp32?
huhu0823 opened this issue · 3 comments
huhu0823 commented
Hi, I am currently studying the Megatron framework. I noticed that with bfloat16, Megatron requires gradient accumulation and allreduces to be completed in fp32, so the gradients are communicated in fp32 format.
With fp16, however, gradient accumulation and allreduces can be completed in fp16, so the gradients are communicated in fp16 format.
I would like to know the specific reasons behind these two different approaches.
See lines 159-160 of the megatron/arguments.py file.
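To make the precision concern concrete, here is a small standalone sketch (not Megatron code, just an assumed illustration): bfloat16 keeps fp32's exponent range but has only 7 mantissa bits, while fp16 has 10, so summing many small per-micro-batch gradient contributions directly in bf16 starts rounding them away once the running sum grows.

```python
# Illustration only (not Megatron code): sum many small "gradient"
# contributions in different dtypes and compare to the exact result.
import torch

n_micro_batches = 1000
contrib = 1.0e-3  # assumed per-micro-batch gradient contribution

for dtype in (torch.float32, torch.float16, torch.bfloat16):
    acc = torch.zeros(1, dtype=dtype)
    step = torch.tensor([contrib], dtype=dtype)
    for _ in range(n_micro_batches):
        # bf16 stops absorbing `step` once the ulp of `acc` exceeds it
        acc += step
    print(f"{dtype}: {acc.item():.4f}  (exact sum = {n_micro_batches * contrib})")
```

Around 1.0, bf16's rounding step is about 2^-7 ≈ 0.008, so contributions of 1e-3 are lost well before the sum reaches its true value; fp16's step around 1.0 is about 2^-10 ≈ 0.001, which seems to be why fp16 accumulation/allreduce is tolerable (with loss scaling covering its narrower exponent range) while bf16 gradients are accumulated and all-reduced in fp32.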
github-actions commented
Marking as stale. No activity in 60 days.
Boreaso commented
Same question, does anyone know the reason?
github-actions commented
Marking as stale. No activity in 60 days.