NVIDIA/Megatron-LM

[BUG] Conflicting behavior between the --use-mcore-models, --transformer-impl, and --use-flash-attn options.

Describe the bug
With --use-mcore-models and --use-flash-attn enabled and --transformer-impl set to local, flash attention is silently not used.

To Reproduce
N/A

Expected behavior
N/A

Stack trace/logs
N/A

Environment (please complete the following information):
N/A

Proposed fix
N/A

Additional context
N/A

When you use --use-mcore-models, you cannot use the local implementation. --use-flash-attn decides whether the OSS flash-attention implementation or the cuDNN implementation is used.

Hi @ethanhe42, I understand the process you mentioned, but currently there is no warning for this combination of configuration options, which is not very user-friendly.
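
A minimal sketch of the kind of check that could surface this conflict at argument-parsing time. The flag names mirror Megatron-LM's CLI options, but the `validate_attention_args` helper and the standalone parser below are hypothetical illustrations, not code from the repository:

```python
import argparse
import warnings

def validate_attention_args(args):
    # Hypothetical check: per the discussion above, --use-flash-attn has no
    # effect with --transformer-impl local, so warn instead of silently
    # ignoring the flag.
    if args.transformer_impl == "local" and args.use_flash_attn:
        warnings.warn(
            "--use-flash-attn is ignored when --transformer-impl is 'local'; "
            "use --transformer-impl transformer_engine to enable flash attention."
        )

parser = argparse.ArgumentParser()
parser.add_argument("--use-mcore-models", action="store_true")
parser.add_argument("--use-flash-attn", action="store_true")
parser.add_argument("--transformer-impl", choices=["local", "transformer_engine"],
                    default="transformer_engine")

# The flag combination reported in this issue.
args = parser.parse_args(["--use-mcore-models", "--use-flash-attn",
                          "--transformer-impl", "local"])
validate_attention_args(args)
```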