OpenMOSS/CoLLiE

OOM error when fine-tuning llama2 with LoRA on a single A100

Closed this issue · 2 comments

According to the log there are only 4,194,304 trainable parameters, which should fit on a single GPU (alpaca-lora runs fine with the same `target_modules=["q_proj", "v_proj"]` configuration), yet in practice it OOMs. Is something misconfigured?
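For context, the LoRA setup described above presumably looks roughly like the sketch below (assuming the standard peft `LoraConfig`; the actual arguments in finetune_llama_lora.py may differ). With rank 8 on q_proj/v_proj across the 32 layers of a 7B llama2, the trainable-parameter count works out to exactly 4,194,304.

```python
# Sketch of the assumed LoRA configuration (not the exact finetune_llama_lora.py contents).
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank 8: 32 layers * 2 modules * 8 * (4096 + 4096) = 4,194,304 params
    lora_alpha=16,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # same modules as the alpaca-lora setup mentioned above
)
```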

```
trainable params: 4,194,304 || all params: 6,742,609,920 || trainable%: 0.06220594176090199
{'fp16': {'enabled': True}, 'monitor_config': {'enabled': True, 'tag': 'sophia_alpaca', 'csv_monitor': {'enabled': True, 'output_path': './ds_logs/', 'job_name': 'sophia_alpaca2023-11-22-08-05-18'}, 'tensorboard': {'enabled': False, 'job_name': 'sophia_alpaca'}, 'wandb': {'enabled': False, 'job_name': 'sophia_alpaca'}}, 'train_micro_batch_size_per_gpu': 1, 'gradient_accumulation_steps': 1}
[2023-11-22 08:05:24,893] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2023-11-22 08:05:24,894] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2023-11-22 08:05:24,894] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
[2023-11-22 08:05:24,923] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW
[2023-11-22 08:05:24,923] [INFO] [logging.py:96:log_dist] [Rank 0] Creating fp16 unfused optimizer with dynamic loss scale
[2023-11-22 08:05:24,923] [INFO] [unfused_optimizer.py:45:__init__] Fused Lamb Legacy : False
Traceback (most recent call last):
  File "finetune_llama_lora.py", line 137, in <module>
    trainer = Trainer(
  File "/home/sser/sunjiafei/llama/collie/collie/controller/trainer.py", line 199, in __init__
    self.setup_parallel_model()
  File "/home/sser/sunjiafei/llama/collie/collie/controller/trainer.py", line 295, in setup_parallel_model
    self.engine, self.optimizer, _, self.lr_scheduler = setup_ds_engine(
  File "/home/sser/sunjiafei/llama/collie/collie/utils/dist_utils.py", line 133, in setup_ds_engine
    engine, optimizer, _, lr_scheduler = initialize(
  File "/home/sser/sunjiafei/llama/collie/collie/utils/dist_utils.py", line 716, in initialize
    engine = DeepSpeedEngine(
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 304, in __init__
    self._configure_optimizer(optimizer, model_parameters)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 1221, in _configure_optimizer
    self.optimizer = self._configure_fp16_optimizer(basic_optimizer)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 1406, in _configure_fp16_optimizer
    optimizer = FP16_UnfusedOptimizer(
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/fp16/unfused_optimizer.py", line 62, in __init__
    fp32_group = [p.clone().float().detach() for p in param_group['params']]
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/fp16/unfused_optimizer.py", line 62, in <listcomp>
    fp32_group = [p.clone().float().detach() for p in param_group['params']]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 500.00 MiB (GPU 0; 39.59 GiB total capacity; 37.43 GiB already allocated; 31.19 MiB free; 38.16 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 17997) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.1.0a0+fe05266', 'console_scripts', 'torchrun')())
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
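For reference, the traceback shows `FP16_UnfusedOptimizer` cloning every parameter in the optimizer's param groups to fp32. If the optimizer was built from all 6,742,609,920 parameters instead of just the 4,194,304 trainable LoRA ones, that clone alone needs about 25 GiB on top of the ~12.6 GiB fp16 model, which lines up with the 37.43 GiB already allocated in the OOM message. A rough check using the numbers from the log (illustrative arithmetic only):

```python
# Back-of-the-envelope memory check using the numbers reported in the log above.
all_params       = 6_742_609_920   # total parameters reported by the script
trainable_params = 4_194_304       # LoRA parameters only

gib = 1024 ** 3
fp16_weights    = all_params * 2 / gib         # ~12.6 GiB resident fp16 model
fp32_clone_all  = all_params * 4 / gib         # ~25.1 GiB if every param is cloned to fp32
fp32_clone_lora = trainable_params * 4 / gib   # ~0.016 GiB if only LoRA params are cloned

print(f"fp16 weights:             {fp16_weights:6.1f} GiB")
print(f"fp32 clone of all params: {fp32_clone_all:6.1f} GiB "
      f"-> ~{fp16_weights + fp32_clone_all:.1f} GiB total, OOM on a 40 GB A100")
print(f"fp32 clone of LoRA only:  {fp32_clone_lora:6.3f} GiB")
```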

The optimizer should only be passed the parameters that require grad, e.g. `optimizer = torch.optim.AdamW(filter(lambda p: p.requires_grad, model.parameters()), lr=args.lr)`. Did you apply that filter?
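One quick way to verify the filter took effect is to count the parameters that actually ended up in the optimizer (a hypothetical sanity check, not part of CoLLiE):

```python
# Hypothetical sanity check: count parameters handed to the optimizer.
n_opt_params = sum(p.numel() for group in optimizer.param_groups for p in group["params"])
print(f"params in optimizer: {n_opt_params:,}")  # expect 4,194,304 (LoRA only), not 6,742,609,920
```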


I hadn't filtered before; I was running the original code as-is. After making that change it works. Thanks!