Multi-gpu issue for train.py script
Closed this issue · 2 comments
gianscarpe commented
🐛 Bug
I got the following error when launching the script on multiple GPUs. I'll investigate this a bit more.
File "train.py", line 10, in hydra_entry
    main(cfg)
File "/home/gianscarpe/dev/lightning-transformers/lightning_transformers/cli/train.py", line 70, in main
    run(
File "/home/gianscarpe/dev/lightning-transformers/lightning_transformers/cli/train.py", line 61, in run
    trainer.fit(model, datamodule=data_module)
File "/home/gianscarpe/.local/share/virtualenvs/lightning-transformers-iDRRdFSW/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
    self.dispatch()
File "/home/gianscarpe/.local/share/virtualenvs/lightning-transformers-iDRRdFSW/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
    self.accelerator.start_training(self)
File "/home/gianscarpe/.local/share/virtualenvs/lightning-transformers-iDRRdFSW/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
    self.training_type_plugin.start_training(trainer)
File "/home/gianscarpe/.local/share/virtualenvs/lightning-transformers-iDRRdFSW/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 107, in start_training
    mp.spawn(self.new_process, **self.mp_spawn_kwargs)
File "/home/gianscarpe/.local/share/virtualenvs/lightning-transformers-iDRRdFSW/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/gianscarpe/.local/share/virtualenvs/lightning-transformers-iDRRdFSW/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 179, in start_processes
    process.start()
File "/usr/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
File "/usr/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
File "/usr/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
File "/usr/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
File "/usr/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'
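For context on the error above: the `ddp_spawn` plugin pickles the model (including its scheduler state) to send it to the spawned worker processes, and `get_linear_schedule_with_warmup` builds its schedule from a function defined inside another function, which pickle cannot serialize. A minimal standalone sketch (no Lightning or Transformers involved; `make_scheduler_fn` is a hypothetical stand-in) reproduces the same `AttributeError`:

```python
import pickle

def make_scheduler_fn(warmup_steps):
    # Local function: pickle serializes functions by qualified name,
    # and "<locals>" names cannot be looked up for unpickling.
    def lr_lambda(step):
        return min(1.0, step / max(1, warmup_steps))
    return lr_lambda

fn = make_scheduler_fn(100)
try:
    pickle.dumps(fn)
except AttributeError as err:
    # AttributeError: Can't pickle local object
    #   'make_scheduler_fn.<locals>.lr_lambda'
    print(err)
```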
To Reproduce
Steps to reproduce the behavior:
Run
python train.py \
task=nlp/translation \
dataset=nlp/translation/wmt16 \
backbone.pretrained_model_name_or_path=google/mt5-base \
trainer.gpus=2
Environment
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (conda, pip, source): source
- Build command you used (if compiling from source):
- Python version: 3.8
- CUDA/cuDNN version: 11.2
- GPU models and configuration: 2x Titan X (Pascal)
- Any other relevant information:
Additional context
Seems related to #164
SeanNaren commented
Thanks for checking out the repo :) do you mind trying master? I just merged the related PR #164 which should fix this issue!
gianscarpe commented
I confirmed the issue is fixed :)