ymcui/Chinese-LLaMA-Alpaca-2

OOM error during instruction fine-tuning on 6 GPUs

afezeriaWrnbbmm opened this issue · 4 comments

Checklist before submitting

  • Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
  • I have read the FAQ section of the project documentation and searched the existing issues, and found no similar problem or solution.
  • For third-party plugin issues (e.g. llama.cpp, LangChain, text-generation-webui), it is recommended to also look for solutions in the corresponding projects.

Issue type

Model training and fine-tuning

Base model

Chinese-Alpaca-2 (7B/13B)

Operating system

Linux

Describe the issue in detail

Instruction fine-tuning on six RTX 3090 GPUs, configured according to the instruction fine-tuning script Wiki, fails with the error below.
torchrun --nnodes 1 --nproc_per_node 6 run_clm_sft_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --fp16 \
    --num_train_epochs 1 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length ${max_seq_length} \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --save_safetensors False \
    --ddp_find_unused_parameters False

Dependencies (must be provided for code-related issues)

# Paste your dependency information here (inside this code block)

Run logs or screenshots

[INFO|modeling_utils.py:1400] 2024-03-21 07:29:09,883 >> Instantiating LlamaForCausalLM model under default dtype torch.float16.
[INFO|configuration_utils.py:845] 2024-03-21 07:29:09,885 >> Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}

Loading checkpoint shards: 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 2/3 [00:11<00:05, 5.64s/it]
Traceback (most recent call last):
File "/home/nvidia3090/Chinese-LLaMA-Alpaca-2/scripts/training/run_clm_sft_with_peft.py", line 513, in
main()
File "/home/nvidia3090/Chinese-LLaMA-Alpaca-2/scripts/training/run_clm_sft_with_peft.py", line 405, in main
model = LlamaForCausalLM.from_pretrained(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
) = cls._load_pretrained_model(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/root/miniconda3/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 384, in set_module_tensor_to_device
new_value = value.to(device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 0 has a total capacity of 23.69 GiB of which 41.69 MiB is free. Including non-PyTorch memory, this process has 23.64 GiB memory in use. Of the allocated memory 23.29 GiB is allocated by PyTorch, and 1.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Loading checkpoint shards: 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 2/3 [00:11<00:05, 5.72s/it]
Traceback (most recent call last):
File "/home/nvidia3090/Chinese-LLaMA-Alpaca-2/scripts/training/run_clm_sft_with_peft.py", line 513, in
main()
File "/home/nvidia3090/Chinese-LLaMA-Alpaca-2/scripts/training/run_clm_sft_with_peft.py", line 405, in main
model = LlamaForCausalLM.from_pretrained(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
) = cls._load_pretrained_model(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/root/miniconda3/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 384, in set_module_tensor_to_device
new_value = value.to(device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 1 has a total capacity of 23.69 GiB of which 41.69 MiB is free. Including non-PyTorch memory, this process has 23.64 GiB memory in use. Of the allocated memory 23.29 GiB is allocated by PyTorch, and 1.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Loading checkpoint shards: 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 2/3 [00:12<00:06, 6.37s/it]
Traceback (most recent call last):
File "/home/nvidia3090/Chinese-LLaMA-Alpaca-2/scripts/training/run_clm_sft_with_peft.py", line 513, in
main()
File "/home/nvidia3090/Chinese-LLaMA-Alpaca-2/scripts/training/run_clm_sft_with_peft.py", line 405, in main
model = LlamaForCausalLM.from_pretrained(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
) = cls._load_pretrained_model(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/root/miniconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/root/miniconda3/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 384, in set_module_tensor_to_device
new_value = value.to(device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 2 has a total capacity of 23.69 GiB of which 41.69 MiB is free. Including non-PyTorch memory, this process has 23.64 GiB memory in use. Of the allocated memory 23.29 GiB is allocated by PyTorch, and 1.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[2024-03-21 07:29:23,629] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1562212 closing signal SIGTERM
[2024-03-21 07:29:23,630] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1562213 closing signal SIGTERM
[2024-03-21 07:29:23,630] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1562214 closing signal SIGTERM
[2024-03-21 07:29:23,630] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1562215 closing signal SIGTERM
[2024-03-21 07:29:25,362] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 1562210) of binary: /root/miniconda3/bin/python
Traceback (most recent call last):
File "/root/miniconda3/bin/torchrun", line 8, in
sys.exit(main())
File "/root/miniconda3/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 347, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/lib/python3.9/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/root/miniconda3/lib/python3.9/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/root/miniconda3/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 135, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

run_clm_sft_with_peft.py FAILED

Failures:
[1]:
time : 2024-03-21_07:29:23
host : nvidia3090
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 1562211)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2024-03-21_07:29:23
host : nvidia3090
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 1562210)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

First check whether the model can be fully loaded on a single GPU. If it can, then the failure is due to insufficient system RAM (not GPU memory).
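A minimal sketch of that single-card check (not one of the project's scripts; the checkpoint path is a placeholder and `low_cpu_mem_usage` is an assumed, optional setting):

```python
# Single-GPU loading check (sketch). If the process is killed or swaps heavily while
# the shards are read, host RAM is the bottleneck; a CUDA OOM on the .to("cuda:0")
# call points at VRAM instead.
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "/path/to/chinese-alpaca-2-hf",  # placeholder, not the actual path from this issue
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,          # load shard by shard instead of holding a second full copy in RAM
)
model.to("cuda:0")                   # move the whole model onto one card
print("loaded", sum(p.numel() for p in model.parameters()) / 1e9, "B parameters")
```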

I have 188 GB of RAM, so that probably isn't the cause, is it?
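For what it's worth, with `--nproc_per_node 6` each rank typically loads its own full copy of the checkpoint during startup, so peak host-RAM use can be several times the single-process figure. A rough way to watch RAM headroom during one CPU-side load (a sketch; `psutil` and the path are assumptions, not from this issue):

```python
# Hypothetical diagnostic: measure available host RAM before and after loading the
# checkpoint on CPU (no GPU involved), to see how much one loading process consumes.
import psutil
import torch
from transformers import LlamaForCausalLM

def avail_gib() -> float:
    return psutil.virtual_memory().available / 2**30

print(f"available RAM before load: {avail_gib():.1f} GiB")
model = LlamaForCausalLM.from_pretrained(
    "/path/to/chinese-alpaca-2-hf",  # placeholder checkpoint path
    torch_dtype=torch.float16,       # keep weights in fp16 on the CPU side
)
print(f"available RAM after load:  {avail_gib():.1f} GiB")
```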

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.