jzhang38/EasyContext

Need a running script for ‘dist_flash_attn’

Opened this issue · 5 comments

Can you provide a script to run dist_flash_attn? I tried setting parallel_mode to dist_flash_attn, but it didn't run successfully.

When trying to use 'dist_flash_attn' with 2×A100, process 0 gets stuck in the torch.cuda.synchronize() call inside _lightseq_forward of one decoder layer, while process 1 has already reached the same step of the next decoder layer. Strangely, the model only gets stuck on the second sample. What might be causing this bug, and is there any way to solve it?

It seems that the communication for process 0 in maybe_send_recv_fwd_qkvo never completes.

Well, after making the input sequence length divisible by world_size * block_size, it runs normally.
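For reference, a minimal sketch (not from the repo) of how one might pad the inputs so the sequence length becomes divisible by world_size * block_size. The helper name pad_to_multiple, the default block_size of 128, and pad_token_id are assumptions for illustration only:

```python
import torch

def pad_to_multiple(input_ids: torch.Tensor, world_size: int,
                    block_size: int = 128, pad_token_id: int = 0) -> torch.Tensor:
    """Hypothetical helper: pad the last dim so its length is a multiple of
    world_size * block_size (block_size=128 is only an assumed flash-attn tile size)."""
    multiple = world_size * block_size
    seq_len = input_ids.shape[-1]
    remainder = seq_len % multiple
    if remainder == 0:
        return input_ids
    pad_len = multiple - remainder
    pad = torch.full(
        (*input_ids.shape[:-1], pad_len), pad_token_id,
        dtype=input_ids.dtype, device=input_ids.device,
    )
    return torch.cat([input_ids, pad], dim=-1)

# Example: with 2 GPUs and the assumed block_size of 128,
# a length-1000 sequence is padded up to 1024 (a multiple of 256).
ids = torch.randint(0, 32000, (1, 1000))
print(pad_to_multiple(ids, world_size=2).shape)  # torch.Size([1, 1024])
```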

What is block_size?

The block_size used by flash-attn.

I'm sorry, I don't understand. I didn't find any block_size parameter in this repo. Could you please tell me where it is?

It seems to be here.