xuyuzhuang11/OneBit

2nd stage training (running DeepSpeed training for scripts/llama2_7b.sh) got stuck at step 5000. Is this the expected behavior?

Closed this issue · 5 comments

Dear authors,

I ran the second-stage DeepSpeed training on a server with 8 A100 80GB GPUs, but the training seems to get stuck after saving the step 5000 checkpoint:
[2024-06-26 04:16:12,291] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /ssd/name/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5000/global_step5000/zero_pp_rank_0_mp_rank_00_model_states.pt
[2024-06-26 04:16:12,292] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /ssd/name/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5000/global_step5000/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2024-06-26 04:16:12,370] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /ssd/name/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5000/global_step5000/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2024-06-26 04:16:15,567] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /ssd/name/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5000/global_step5000/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2024-06-26 04:16:33,652] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /ssd/name/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5000/global_step5000/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2024-06-26 04:16:33,702] [INFO] [engine.py:3381:_save_zero_checkpoint] zero checkpoint saved /ssd/name/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5000/global_step5000/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2024-06-26 04:16:33,970] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step5000 is ready now!

The GPUs still show 100% utilization, but there is no further logging output. I wonder whether checkpoint 5000 is enough to reproduce your results, or whether I am expected to train for longer.

Thank you!

Hi! I have noticed your question. The problem you are facing is probably not the expected behavior, and training for only 5000 steps may not be enough to reproduce our work. Could you please share more details about this?

I modified line #45 of scripts/llama2_7b.sh from --save_steps 5000 \ to --save_steps 5 \ and the logging got stuck in the same way at step 5, as follows:

[2024-07-06 19:35:29,611] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /ssd/hwu/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5/global_step5/zero_pp_rank_0_mp_rank_00_model_states.pt
[2024-07-06 19:35:29,611] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /ssd/hwu/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5/global_step5/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2024-07-06 19:35:29,745] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /ssd/hwu/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5/global_step5/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2024-07-06 19:35:34,874] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /ssd/hwu/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5/global_step5/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2024-07-06 19:36:06,152] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /ssd/hwu/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5/global_step5/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2024-07-06 19:36:06,186] [INFO] [engine.py:3381:_save_zero_checkpoint] zero checkpoint saved /ssd/hwu/code/OneBit/scripts/ckpt_Llama-2-7b-hf/checkpoint-5/global_step5/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2024-07-06 19:36:06,219] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step5 is ready now!

I guess the problem comes from checkpoint saving, but I have not found any solution yet!

I was able to get past the model-saving problem by adding { "zero_optimization": { "stage3_gather_16bit_weights_on_model_save": true } } to the DeepSpeed config (ref), as sketched below.
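For anyone who hits the same hang: the flag goes under zero_optimization in whatever DeepSpeed config JSON the training script points at. A minimal sketch of the merged result is below; the "stage": 3 entry is my assumption about the existing config (the flag only has an effect under ZeRO stage 3), and the gather flag is the only actual addition.

{
  "zero_optimization": {
    "stage": 3,
    "stage3_gather_16bit_weights_on_model_save": true
  }
}

With this enabled, DeepSpeed gathers the full 16-bit weights across ranks when the model is saved, which is what save_pretrained() needs in order to write a consolidated checkpoint under ZeRO-3.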

But afterwards, I encountered another problem:
File "/ssd/hwu/code/OneBit/llama_factory/llamafactory/kd.py", line 50, in compute_loss
    teacher_outputs = model.teacher_model(
File "/home/hwu/anaconda3/envs/onebit/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 466, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'DeepSpeedEngine' object has no attribute 'teacher_model'
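For context, DeepSpeed's engine forwards attribute lookups it does not recognize to the wrapped model, so the error surfaces on the engine even though it is the wrapped model that lost teacher_model. A toy illustration of that forwarding pattern (my own sketch, not DeepSpeed's actual code):

import torch.nn as nn

# Toy stand-in for the engine: resolve unknown attributes on the wrapped model.
class EngineLikeWrapper(nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module                    # the wrapped training model

    def __getattr__(self, name):
        try:
            return super().__getattr__(name)    # the wrapper's own parameters/submodules
        except AttributeError:
            return getattr(self.module, name)   # fall back to the wrapped model

student = nn.Linear(4, 4)
student.teacher_model = nn.Linear(4, 4)         # teacher attached as a submodule of the student (assumed layout)
engine = EngineLikeWrapper(student)

print(type(engine.teacher_model))               # resolved through the wrapped model
del student.teacher_model                       # what the save path effectively does (see below)
try:
    engine.teacher_model
except AttributeError as err:
    print(err)                                  # same failure mode as the traceback above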

OK, in "modeling_utils.py" at line #4523, the code

model_to_save = unwrap_model(self) 
if hasattr(model_to_save, "teacher_model"):
    del model_to_save.teacher_model

already deletes teacher_model whenever save_pretrained() is executed, so the only option left is to save the model only at the end of training (or save when DeepSpeed hits 5k iterations, exit, load the saved checkpoint, and continue for another 5k). A rough workaround sketch follows.
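If anyone still needs periodic checkpoints during training, here is a rough, untested workaround sketch (my own idea, not the authors' fix; the helper name is made up and it assumes a recent transformers Trainer that exposes trainer.accelerator): grab a handle to the teacher before saving and re-attach it afterwards, so later compute_loss() calls still find it.

# Rough, untested workaround sketch, not OneBit's actual code. It assumes the
# student model exposes the distillation teacher as the attribute
# `teacher_model`, which the save path shown above deletes.
def save_checkpoint_keeping_teacher(trainer, output_dir):
    unwrapped = trainer.accelerator.unwrap_model(trainer.model)  # the plain nn.Module behind the engine
    teacher_ref = getattr(unwrapped, "teacher_model", None)      # keep a handle before it gets deleted
    trainer.save_model(output_dir)                               # triggers save_pretrained(), which may delete teacher_model
    if teacher_ref is not None and not hasattr(unwrapped, "teacher_model"):
        unwrapped.teacher_model = teacher_ref                    # restore it for the next compute_loss() call

Otherwise, saving only once at the end of training is the simplest way around it.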

OK! I tried to reproduce this bug but could not. Since the problem has been settled, I will close this issue. I think it may be fully resolved in a future upgrade. Thank you!