AetherCortex/Llama-X

About the training strategy

SparkJiao opened this issue · 7 comments

Very nice project, and I appreciate your contribution!

I have seen the DeepSpeed config and I want to confirm the current training strategy. For LLaMA-13B, the training uses ZeRO-3 optimization, checkpointing, and CPU offload, right? I'm curious whether you have tried tensor parallelism (as used in the original LLaMA training) or model parallelism.
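
For reference, the kind of ZeRO-3 + CPU-offload setup I mean looks roughly like this (illustrative values, not necessarily the exact config in this repo):

```python
# Illustrative ZeRO-3 + CPU-offload DeepSpeed config; the values here are only a
# sketch of the strategy I am asking about, not necessarily the repo's exact file.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},  # fp16 rather than bf16 on V100-class GPUs
    "zero_optimization": {
        "stage": 3,                              # partition params, grads, optimizer states
        "offload_param": {"device": "cpu"},      # push parameters to CPU memory
        "offload_optimizer": {"device": "cpu"},  # push optimizer states to CPU memory
    },
}
```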

We would also love to contribute to a model-parallel training implementation for fast large-scale training, aiming at models larger than 13B. Currently I'm investigating the torch/fairscale pipeline-parallelism mechanism.
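
For concreteness, here is a rough sketch of what I have in mind with fairscale's GPipe-style Pipe (a toy model with made-up layer sizes and a two-GPU split, not this repo's code):

```python
import torch
import torch.nn as nn
from fairscale.nn import Pipe

# Toy model only: the layer sizes, the 3/2 split, and the chunk count are made up
# for illustration; this is not Llama-X code.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Place the first 3 modules on GPU 0 and the last 2 on GPU 1, and split each
# input batch into 4 micro-batches so the two pipeline stages can overlap.
pipe = Pipe(model, balance=[3, 2], devices=[0, 1], chunks=4)

x = torch.randn(32, 1024).cuda(0)  # input starts on the first stage's device
y = pipe(x)                        # output ends up on the last stage's device
```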

Best,
Fangkai

Thank you for your interest in and attention to our project.

For LLaMA-13B, the training uses ZeRO-3 optimization, checkpointing, and CPU offload. Currently, there is an NCCL error when using this code on multi-node systems with the 33B and 65B models. We also tried model parallelism (as used in the original LLaMA training), but gradient explosion sometimes occurs on multi-node setups as well. We are trying to solve these issues.

We also welcome contributors to help solve this issue. If you can implement stable multi-node training for the 33B and 65B models based on this codebase or any other framework, you can check in your code, and after our verification we will merge it into the main branch.

@AetherCortex Hi, may I ask if a batch size of 64 is too large for 8 V100 GPUs? I tried bs=64, gradient_update=1 and bs=32, gradient_update=2; both hit OOM errors. All other training settings follow this repo. Any suggestions?

The specific batch size definitely depends on your environment; as long as everything is set up correctly, this number should not differ much.
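
In general, the knobs that matter most for memory are the per-device micro batch size, gradient accumulation, and checkpointing; roughly like this (a minimal sketch assuming a Hugging Face Trainer setup, with illustrative values rather than this repo's exact configuration):

```python
from transformers import TrainingArguments

# Memory-related knobs for a Hugging Face Trainer setup; the values are
# illustrative only, not this repo's exact configuration.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # shrink the micro batch first when you hit OOM
    gradient_accumulation_steps=8,   # keep the effective batch size via accumulation
    gradient_checkpointing=True,     # trade recompute for activation memory
    fp16=True,                       # V100: fp16 mixed precision (no bf16)
    # deepspeed="ds_config.json",    # hypothetical path to the ZeRO-3/offload config
)
```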

Does the V100 support bf16? It is obviously much more numerically stable than fp16, though I don't quite understand in what way multi-node training would differ from single-node. Maybe it switches precision in the optimizer to reduce the amount of information passed?

@AetherCortex Does bs=64 mean per_device or global? I used a similar configuration (ZeRO-3, checkpointing, CPU offload, global_bs=64 on 8 × V100 32G) in my project, but my training speed is only 1/3 of what you described.
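
My understanding of the usual relation under data parallelism with gradient accumulation is the following (example numbers only, not necessarily this repo's settings):

```python
# How the global batch size is usually computed under data parallelism with
# gradient accumulation. Example numbers only, not necessarily the repo's settings.
per_device_micro_batch = 1   # samples each GPU processes per forward/backward pass
num_gpus = 8                 # data-parallel world size (8 x V100 here)
grad_accum_steps = 8         # accumulation steps before each optimizer update

global_batch = per_device_micro_batch * num_gpus * grad_accum_steps
print(global_batch)          # 64, i.e. "bs=64" read as a global batch size
```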

The V100 does not support bf16. Maybe there is a precision problem in communication, or a bug in DeepSpeed.
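
You can check this on your own hardware with standard PyTorch calls:

```python
import torch

# Quick hardware check: bf16 needs compute capability >= 8.0 (Ampere, e.g. A100).
# A V100 reports capability (7, 0), so is_bf16_supported() returns False there
# and fp16 with loss scaling is the usual fallback.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))
print(torch.cuda.is_bf16_supported())
```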

Have you solved the problem of training on multi-node systems?