r-three/t-few

Multi-GPU Support

danielkorat opened this issue · 8 comments

Hello,

Have you tried training on a multi-GPU setup? I tried running your fine-tuning example like so:

export CUDA_VISIBLE_DEVICES=0,1
python -m src.pl_train -c t03b.json+ia3.json+rte.json -k load_weight="pretrained_checkpoints/t03b_ia3_finish.pt" exp_name=t03b_rte_seed42_ia3_pretrained100k few_shot_random_seed=42 seed=42

But I get errors in the lightning data loaders.

Any ideas?
Thank you

Hi @danielkorat, you may try setting "compute_strategy" to "ddp" or "deepspeed_stage_3".
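
For example, something like this (an untested sketch; I'm assuming compute_strategy can be overridden through -k in the same way as the other keys in your command):

export CUDA_VISIBLE_DEVICES=0,1
python -m src.pl_train -c t03b.json+ia3.json+rte.json -k load_weight="pretrained_checkpoints/t03b_ia3_finish.pt" exp_name=t03b_rte_seed42_ia3_pretrained100k few_shot_random_seed=42 seed=42 compute_strategy="ddp"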

I tried it; the code hangs after starting the experiment and then skips it. Looks like a parallelization issue:

Start experiment t03b_rte_seed42_ia3_pretrained
{
    "exp_dir": "/store/code/t-few/exp_out/t03b_rte_seed42_ia3_pretrained",
    "exp_name": "t03b_rte_seed42_ia3_pretrained",
    ....
    ....
}
Skip finished experiment t03b_rte_seed42_ia3_pretrained

Hi, sorry for getting back late. Add allow_skip_exp=false to the command, similar to https://github.com/r-three/t-few/blob/master/configs/t011b.json, in order to run multi-GPU training.
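
For example (a sketch of the full command, again assuming the -k override syntax above; the allow_skip_exp=false override is there so the previously claimed experiment isn't skipped):

python -m src.pl_train -c t03b.json+ia3.json+rte.json -k load_weight="pretrained_checkpoints/t03b_ia3_finish.pt" exp_name=t03b_rte_seed42_ia3_pretrained100k few_shot_random_seed=42 seed=42 compute_strategy="deepspeed_stage_3" allow_skip_exp=false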

Hi @muqeeth,

When I try compute_strategy=deepspeed_stage_3 (with allow_skip_exp=false), I get the following error.
My goal is to fit your model across my GPUs (model parallelism, not data parallelism).
I'm using 4 x NVIDIA RTX GPUs with 24GB each, with the package versions as they appear in requirements.txt.
My machine has 40 CPUs and 128GB of RAM.
I tried many DeepSpeed configurations; I suspect it's an issue related to the integration of pytorch-lightning with DeepSpeed.

Thank you

Mark experiment t03b_rte_seed42_ia3_pretrained as claimed
initializing deepspeed distributed: GLOBAL_RANK: 0, MEMBER: 1/2
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
initializing deepspeed distributed: GLOBAL_RANK: 1, MEMBER: 2/2
[2022-06-15 13:21:21,093] [WARNING] [deepspeed.py:630:_auto_select_batch_size] Tried to infer the batch size for internal deepspeed logging from the `train_dataloader()`. To ensure DeepSpeed logging remains correct, please manually pass the plugin with the batch size, `Trainer(strategy=DeepSpeedPlugin(logging_batch_size_per_gpu=batch_size))`.
Reusing dataset super_glue (/home/dkorat/.cache/huggingface/datasets/super_glue/rte/1.0.2/d040c658e2ddef6934fdd97deb45c777b6ff50c524781ea434e7219b56a428a7)
Reusing dataset super_glue (/home/dkorat/.cache/huggingface/datasets/super_glue/rte/1.0.2/d040c658e2ddef6934fdd97deb45c777b6ff50c524781ea434e7219b56a428a7)
Train size 32
Eval size 277
Train size 32
Eval size 277
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
You have not specified an optimizer or scheduler within the DeepSpeed config. Using `configure_optimizers` to define optimizer and scheduler.
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp.py:510: UserWarning: Error handling mechanism for deadlock detection is uninitialized. Skipping check.
  rank_zero_warn("Error handling mechanism for deadlock detection is uninitialized. Skipping check.")
Traceback (most recent call last):
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/store/code/t-few/src/pl_train.py", line 98, in <module>
    main(config)
  File "/store/code/t-few/src/pl_train.py", line 69, in main
    trainer.fit(model, datamodule)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1188, in _run
    self._pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1223, in _pre_dispatch
    self.accelerator.pre_dispatch(self)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 136, in pre_dispatch
    self.training_type_plugin.pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 389, in pre_dispatch
    self.init_deepspeed()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 459, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 492, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 430, in _setup_model_and_optimizer
    dist_init_required=False,
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/__init__.py", line 129, in initialize
    config_params=config_params)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 258, in __init__
    self._configure_distributed_model(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1066, in _configure_distributed_model
    self._broadcast_model()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 969, in _broadcast_model
    group=self.data_parallel_group)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1163, in broadcast
    work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
Traceback (most recent call last):
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/store/code/t-few/src/pl_train.py", line 98, in <module>
    main(config)
  File "/store/code/t-few/src/pl_train.py", line 69, in main
    trainer.fit(model, datamodule)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1188, in _run
    self._pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1223, in _pre_dispatch
    self.accelerator.pre_dispatch(self)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 136, in pre_dispatch
    self.training_type_plugin.pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 389, in pre_dispatch
    self.init_deepspeed()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 459, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 492, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 430, in _setup_model_and_optimizer
    dist_init_required=False,
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/__init__.py", line 129, in initialize
    config_params=config_params)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 258, in __init__
    self._configure_distributed_model(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1066, in _configure_distributed_model
    self._broadcast_model()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 969, in _broadcast_model
    group=self.data_parallel_group)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1163, in broadcast
    work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
(tfew) dkorat@gpu-m7mm8:/store/code/t-few$ 

I am sorry for getting back late. I don't think I can fully resolve the problem. One thing I noticed: even though you have 4 GPUs, I think only 0 and 1 are being used. Maybe try export CUDA_VISIBLE_DEVICES=0,1,2,3.
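
Something like this (an untested sketch combining the suggestions in this thread):

export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m src.pl_train -c t03b.json+ia3.json+rte.json -k load_weight="pretrained_checkpoints/t03b_ia3_finish.pt" exp_name=t03b_rte_seed42_ia3_pretrained100k few_shot_random_seed=42 seed=42 compute_strategy="deepspeed_stage_3" allow_skip_exp=false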

The deepspeed, torch, and CUDA versions in requirements.txt worked for us on A100, A5000, and A6000 GPUs. I am not sure about other GPUs. Maybe @HaokunLiu can help?

I didn't run into the problem you posted. I worked on DeepSpeed for a while, and after digging through a lot of other problems, I was able to run the model with it. But it was usually very slow, so we didn't use it in our final experiments. Instead, we rented some 80GB A100s online. The experiments finished quickly, so it wasn't as expensive as it sounds.

Overall, I recommend using big GPUs + ddp rather than deepspeed, if possible.

I see, thanks for the info!