NVIDIA/Megatron-LM

[QUESTION] Why does bfloat16 require gradient accumulation and all-reduces to be completed in fp32?

huhu0823 opened this issue · 3 comments

Hi, I am currently studying the Megatron framework. I noticed that with bfloat16, Megatron requires gradient accumulation and all-reduces to be done in fp32, so gradients are communicated in fp32.
With fp16, however, gradient accumulation and all-reduces can be done in fp16, so gradients are communicated in fp16.
I would like to know the specific reasons behind these two different approaches.

Lines 159-160 of the megatron/arguments.py file:

[Screenshot of megatron/arguments.py, lines 159-160]
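To make the precision concern behind the question concrete: bf16 keeps roughly 8 bits of significand precision versus fp16's 11, so a gradient buffer kept in bf16 starts dropping small per-micro-batch contributions much sooner than an fp16 or fp32 buffer. The following is a minimal PyTorch sketch, not Megatron code; the step count and per-step gradient value are arbitrary choices for illustration only.

```python
import torch

# Toy illustration (not Megatron code) of gradient-accumulation drift in
# different dtypes; the step count and per-step value are arbitrary.
num_micro_batches = 1000
contribution = 1e-3  # per-micro-batch gradient contribution

for dtype in (torch.bfloat16, torch.float16, torch.float32):
    grad_buffer = torch.zeros(1, dtype=dtype)
    for _ in range(num_micro_batches):
        # Each += rounds the running sum to the buffer's dtype, so a
        # low-precision buffer silently drops small contributions once
        # the sum grows large relative to its spacing (ulp).
        grad_buffer += torch.tensor([contribution], dtype=dtype)
    print(f"{dtype}: {grad_buffer.item():.6f}  (exact sum is 1.0)")
```

If I understand the design correctly, this is the kind of loss that keeping a separate fp32 buffer for gradient accumulation and all-reduce avoids for bf16, whereas fp16 (combined with loss scaling) retains enough significand bits for gradients to survive accumulation in fp16.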

Marking as stale. No activity in 60 days.

Same question, does anyone know the reason?

Marking as stale. No activity in 60 days.