Applying lpmm.optim.AdamW with the transformers Trainer for multi-GPU training -> error
Yeojoon opened this issue · 5 comments
Hi, thank you for the interesting idea and the very helpful implementation! I tried to apply lpmm.optim.AdamW with the transformers Trainer for multi-GPU training, but I got the error below.
lib/python3.10/site-packages/accelerate/utils/operations.py", line 167, in send_to_device
return tensor.to(device, non_blocking=non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Doesn't your current code support multi-GPU training? Thanks!
Also, it seems that your code does not support torch.bfloat16?
Hi, thank you for your question. All of the experiments reported in our paper, including image classification, machine translation, GPT-2 fine-tuning, and LLaMA fine-tuning, were conducted in multi-GPU settings. The error may therefore depend on various factors, such as the version of transformers, the type of GPU, the pretrained model used, and so on.
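For reference, here is a minimal sketch of wiring lpmm.optim.AdamW into the transformers Trainer; the tiny GPT-2 config, synthetic dataset, and hyperparameters are placeholders rather than the setup from the paper, and lpmm.optim.AdamW is assumed to accept the standard torch.optim.AdamW arguments:
import torch
from torch.utils.data import Dataset
from transformers import GPT2Config, GPT2LMHeadModel, Trainer, TrainingArguments
import lpmm


class ToyDataset(Dataset):
    """Synthetic token sequences so the sketch runs without external data."""
    def __len__(self):
        return 64

    def __getitem__(self, idx):
        ids = torch.randint(0, 100, (16,))
        return {"input_ids": ids, "labels": ids.clone()}


# Tiny placeholder model; in practice this would be the pretrained model being fine-tuned.
model = GPT2LMHeadModel(GPT2Config(vocab_size=100, n_layer=2, n_head=2, n_embd=64))

# 4-bit AdamW used as a drop-in replacement for torch.optim.AdamW.
optimizer = lpmm.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=ToyDataset(),
    optimizers=(optimizer, None),  # Trainer builds the LR scheduler when None is passed
)
trainer.train()  # launch the script with torchrun for multi-GPU (DDP) training
When launched with torchrun, the Trainer follows its standard DDP path for multi-GPU training, which should help isolate whether the error comes from the optimizer or from the surrounding setup.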
Regarding torch.bfloat16: our 4-bit optimizers are compatible with torch.cuda.amp, where the forward and backward computations are carried out in 16-bit while the optimizer states are stored in 4-bit. In this case, the 32-bit weights still need to be stored, and the optimizer state update is performed in 32-bit. This also applies to LLaMA fine-tuning. In general, our 4-bit optimizers do not change the parameter dtype and thus do not affect the forward and backward computations. The optimizer state update may be performed in 32-bit, but this step is cheap, and the optimizer states are ultimately stored in 4-bit.
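As an illustration, here is a minimal sketch of bf16 mixed-precision training with the 4-bit optimizer; the linear model and squared loss are placeholders, and lpmm.optim.AdamW is again assumed to take the standard AdamW arguments:
import torch
import lpmm

# Placeholder model; the weights stay in 32-bit.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = lpmm.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    x = torch.randn(8, 1024, device="cuda")
    # Forward/backward run in bfloat16 under autocast (no GradScaler is needed for bf16).
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()       # 32-bit state update; the states are then stored in 4-bit
    optimizer.zero_grad()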
I see. Thank you for your kind and detailed explanation!
I have one more quick question! Currently, the default setting for the second-moment quantization (_C.QUANT.SQM) is 'group' for normalization and 'power-1' for mapping, but in your paper you used 'rank1' for normalization and 'linear' for mapping. Do I need to change this default setting?
You could pass the qconfig argument, which defines the quantization setting, to the 4-bit optimizers. To use 'rank1' normalization and 'linear' (equivalent to 'power1') mapping for the second moment, you could do the following:
optimizer = lpmm.optim.AdamW(
    parameters,  # model parameters, e.g. model.parameters()
    qconfig="path/to/lpmm/configs/default.yml",  # quantization config file
)
If the qconfig argument is set to None, the optimizer will use the settings defined in config.py, just as you mentioned. You could also use the other provided qconfig files to modify the quantization settings.
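For example, a sketch of the fallback case, with parameters standing for the model parameters as above:
optimizer = lpmm.optim.AdamW(
    parameters,
    qconfig=None,  # fall back to the defaults in config.py ('group' + 'power-1' for the second moment)
)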
Gotcha! Thank you for the explanation. It was very helpful!