bitsandbytes-foundation/bitsandbytes

dequantize_4bit() gives wrong output in CUDA graph mode

chenqianfzh opened this issue · 4 comments

System Info

Linux

Reproduction

I am trying to add BitsAndBytes support to vLLM (https://github.com/vllm-project/vllm). My eager-mode implementation works correctly and has already been merged.

However, I found that the weight produced by dequantize_4bit() under CUDA graph mode differs from the eager-mode result, which makes the model generate nonsense output.

Does anybody have insights into this issue?

I tried to reduce it to a simple script, but that turned out to be hard because capturing the CUDA graph is non-trivial. The problem reproduces consistently, though, and I would be more than happy to work with community members and share the data I have collected.
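For what it's worth, this is roughly the comparison I have in mind. It is only a rough sketch, not the actual vLLM code path; the tensor shape, the NF4 setting, and the warmup step are illustrative:

```python
import torch
import bitsandbytes.functional as F

# Quantize a dummy weight to NF4 once, outside the graph.
w = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
qw, quant_state = F.quantize_4bit(w, quant_type="nf4")

# Eager-mode reference.
ref = F.dequantize_4bit(qw, quant_state)

# Warm up on a side stream, then capture the same call in a CUDA graph.
s = torch.cuda.Stream()
with torch.cuda.stream(s):
    F.dequantize_4bit(qw, quant_state)
torch.cuda.current_stream().wait_stream(s)

out = torch.empty_like(ref)
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    out.copy_(F.dequantize_4bit(qw, quant_state))
g.replay()
torch.cuda.synchronize()

# Expected: True. If the dequantize kernel is not captured on the graph's
# stream, `out` ends up stale and the comparison fails.
print(torch.allclose(ref, out))
```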

Expected behavior

CUDA graph mode is expected to produce the same dequantized tensors as eager mode.

Thank you for bringing this to our attention @chenqianfzh! I'm not personally aware of a known issue here and do believe it's worth investigating further. If you could provide some more details on the repro steps, that would be appreciated!

For bookkeeping, this relates to vLLM issue: vllm-project/vllm#5569 and the current workaround is to enforce eager mode: vllm-project/vllm#6846

cc: @Titus-von-Koeller
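For anyone hitting this in the meantime, the workaround amounts to something like the following on the vLLM side. The model name is a placeholder and the exact quantization/load-format flags may differ between vLLM versions:

```python
from vllm import LLM

# Workaround until the bitsandbytes fix ships: force eager execution so
# vLLM never captures CUDA graphs around the dequantize path.
llm = LLM(
    model="some/bnb-quantized-model",  # placeholder model name
    quantization="bitsandbytes",
    load_format="bitsandbytes",
    enforce_eager=True,
)
outputs = llm.generate(["Hello, my name is"])
```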

@matthewdouglas @chenqianfzh

I also encountered the same problem mentioned above. After a brief investigation, the likely cause seems to be that the kDequantizeBlockwise kernel is not launched on the current CUDA stream (this pattern is common in BNB). If you want to investigate further, you can refer to the cudagraph test for verification.
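For context, this is roughly why the stream matters. torch.cuda.graph switches to a dedicated capture stream, and only work issued on that stream is recorded into the graph. This is a generic PyTorch illustration, not BNB code:

```python
import torch

x = torch.randn(1024, device="cuda")
y = torch.empty_like(x)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    # Inside the capture, torch.cuda.current_stream() is the capture
    # stream, not the default stream. A kernel hard-coded to launch on
    # stream 0 would not be recorded into the graph, so every g.replay()
    # would leave its output stale.
    assert torch.cuda.current_stream() != torch.cuda.default_stream()
    y.copy_(x * 2)  # issued on the capture stream -> recorded correctly

g.replay()
torch.cuda.synchronize()
assert torch.allclose(y, x * 2)
```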

Hi @matthewdouglas, vLLM is waiting for a new release to pick up this fix, since they install from PyPI. When do you plan to cut a new release?

@devlup We're planning a release early this week.