CyberAgentAILab/layout-dm

StopIteration error when training from scratch on rico25

jiangzhengguo opened this issue · 5 comments

When I train from scratch on the rico25 dataset, I get the error below. It seems that self.parameters() does not work when using DataParallel? Could you give me some help? Thank you!

DATA_DIR=./download/datasets
JOB_DIR=tmp/jobs/rico25/layoutdm_20230328021904
ADDITIONAL_ARGS=
[2023-03-28 02:19:07,932][HYDRA] Launching 1 jobs locally
[2023-03-28 02:19:07,932][HYDRA] #0 : +experiment=layoutdm fid_weight_dir=./download/fid_weights/FIDNetV3 job_dir=tmp/jobs/rico25/layoutdm_20230328021904 dataset=rico25 dataset.dir=./download/datasets data.num_workers=16 seed=0
2023-03-28 02:19:08.108612: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-28 02:19:08.887765: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /data1/zhengguojiang/anaconda3/lib:/usr/local/cuda-11.3/lib64:/data1/zhengguojiang/anaconda3/lib:/usr/local/cuda-11.3/lib64:
2023-03-28 02:19:08.887864: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /data1/zhengguojiang/anaconda3/lib:/usr/local/cuda-11.3/lib64:/data1/zhengguojiang/anaconda3/lib:/usr/local/cuda-11.3/lib64:
2023-03-28 02:19:08.887885: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
[2023-03-28 02:19:09,286][main][INFO] - {'backbone': {'target': 'trainer.models.transformer_utils.TransformerEncoder', 'encoder_layer': {'target': 'trainer.models.transformer_utils.Block', 'd_model': 512, 'nhead': 8, 'dim_feedforward': 2048, 'dropout': 0.0, 'batch_first': True, 'norm_first': True, 'timestep_type': 'adalayernorm', 'diffusion_step': 100}, 'num_layers': 4}, 'dataset': {'target': 'trainer.datasets.rico.Rico25Dataset', 'partial': True, 'dir': './download/datasets', 'max_seq_length': 25}, 'data': {'batch_size': 64, 'bbox_quantization': 'kmeans', 'num_bin_bboxes': 32, 'num_workers': 16, 'pad_until_max': True, 'shared_bbox_vocab': 'x-y-w-h', 'special_tokens': ['pad', 'mask'], 'transforms': ['RandomOrder'], 'var_order': 'c-x-y-w-h'}, 'model': {'target': 'trainer.models.layoutdm.LayoutDM', 'partial': True, 'q_type': 'constrained'}, 'optimizer': {'target': 'torch.optim.AdamW', 'partial': True, 'lr': 0.0005, 'betas': [0.9, 0.98]}, 'sampling': {'temperature': 1.0, 'name': 'random'}, 'scheduler': {'target': 'torch.optim.lr_scheduler.ReduceLROnPlateau', 'partial': True, 'mode': 'min', 'factor': 0.5, 'patience': 2, 'threshold': 0.01}, 'training': {'epochs': 50, 'grad_norm_clip': 1.0, 'weight_decay': 0.1, 'loss_plot_iter_interval': 50, 'sample_plot_epoch_interval': 1, 'fid_plot_num_samples': 1000, 'fid_plot_batch_size': 512}, 'job_dir': 'tmp/jobs/rico25/layoutdm_20230328021904', 'fid_weight_dir': './download/fid_weights/FIDNetV3', 'seed': 0, 'device': 'cuda', 'debug': False}
[2023-03-28 02:19:09,333][trainer.helpers.layout_tokenizer][INFO] - N_total=155, (N_label, N_bbox, N_sp_token)=(25,128,2)
[2023-03-28 02:19:09,477][trainer.models.base_model][INFO] - number of parameters: 1.242499e+01
Error executing job with overrides: ['+experiment=layoutdm', 'fid_weight_dir=./download/fid_weights/FIDNetV3', 'job_dir=tmp/jobs/rico25/layoutdm_20230328021904', 'dataset=rico25', 'dataset.dir=./download/datasets', 'data.num_workers=16', 'seed=0']
Traceback (most recent call last):
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/data1/zhengguojiang/YouTu/layout-origin-dm/src/trainer/trainer/main.py", line 299, in
main()
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/main.py", line 95, in decorated_main
config_name=config_name,
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/utils.py", line 396, in _run_hydra
overrides=overrides,
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/utils.py", line 461, in _run_app
lambda: hydra.multirun(
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/utils.py", line 216, in run_and_report
raise ex
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/utils.py", line 213, in run_and_report
return func()
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/utils.py", line 464, in
overrides=overrides,
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 162, in multirun
ret = sweeper.sweep(arguments=task_overrides)
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/_internal/core_plugins/basic_sweeper.py", line 182, in sweep
_ = r.return_value
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/core/utils.py", line 260, in return_value
raise self._return_value
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "/data1/zhengguojiang/YouTu/layout-origin-dm/src/trainer/trainer/main.py", line 106, in main
train_loss = train(model, train_dataloader, optimizer, cfg, device, writer)
File "/data1/zhengguojiang/YouTu/layout-origin-dm/src/trainer/trainer/main.py", line 227, in train
outputs, losses = model(batch)
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data1/zhengguojiang/YouTu/layout-origin-dm/src/trainer/trainer/models/layoutdm.py", line 70, in forward
outputs, losses = self.model(inputs["seq"])
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise
raise exception
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/data1/zhengguojiang/anaconda3/envs/layoutdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/data1/zhengguojiang/YouTu/layout-origin-dm/src/trainer/trainer/models/categorical_diffusion/constrained.py", line 245, in forward
self.converter.to(self.device)
File "/data1/zhengguojiang/YouTu/layout-origin-dm/src/trainer/trainer/models/categorical_diffusion/base.py", line 113, in device
return next(self.transformer.parameters()).device
StopIteration
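
A toy snippet along these lines (my own guess at the mechanism, not layout-dm code) should hit the same StopIteration on a machine with two or more GPUs, since nn.DataParallel replicas no longer expose their parameters through .parameters():

# Toy repro sketch (hypothetical, not layout-dm code). On PyTorch >= 1.5 with
# 2+ visible GPUs, DataParallel replicas expose no parameters via .parameters(),
# so next(self.parameters()) raises StopIteration inside forward, which
# DataParallel re-raises as "Caught StopIteration in replica 0 on device 0".
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.transformer = nn.Linear(8, 8)

    def forward(self, x):
        device = next(self.parameters()).device  # empty iterator in a replica
        return self.transformer(x.to(device))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(Toy()).cuda()
    model(torch.randn(16, 8).cuda())  # StopIteration caught in replica 0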

Sorry, I haven't encountered this type of error. Could you let me know the exact command you used to launch the training?

I got the same error. It seems that the transformer block in ConstrainedMaskAndReplaceDiffusion has no parameters by the time forward runs on the DataParallel replicas, so next(self.transformer.parameters()) in the device property has nothing to yield. I still don't know why this happens.
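
If that is what is happening, reading the device from a registered buffer instead of from .parameters() should keep working inside the replicas, since buffers are still copied to each device. A generic toy sketch of the pattern (my own code, not a tested patch for layout-dm):

# Hypothetical buffer-based device property (not layout-dm's actual code).
# Buffers, unlike parameters, are still copied into DataParallel replicas,
# so .device keeps working on every GPU.
import torch
import torch.nn as nn

class DeviceAwareModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.transformer = nn.Linear(8, 8)  # stand-in for the real backbone
        self.register_buffer("_device_probe", torch.empty(0))

    @property
    def device(self):
        return self._device_probe.device  # safe inside DataParallel replicas

    def forward(self, x):
        return self.transformer(x.to(self.device))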

I just used the command "bash bin/train rico25 layoutdm". I guess the DataParallel code in layoutdm.py doesn't work with multiple GPUs.

Thank you for reporting. I will make sure to fix it later.
I guess a quick workaround is to use CUDA_VISIBLE_DEVICES=<GPU_ID> to limit the number of GPUs visible, since our model is small (training can be done using a single T4 (16GB)).
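
For a quick sanity check that the workaround is active (plain PyTorch, nothing repo-specific):

import torch
# With CUDA_VISIBLE_DEVICES=0 only one device is visible; nn.DataParallel then
# calls the wrapped module directly instead of replicating it, so the failing
# next(self.parameters()) path inside the replicas is never reached.
print(torch.cuda.device_count())  # expected: 1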

Thanks, that works. I ran "export CUDA_VISIBLE_DEVICES=0" in the root of layout-dm to specify the GPU. There are indeed some problems when multiple GPUs are used for training.