open-mmlab/mmengine

[Bug] `scale_lr()` cannot be called after `ParamScheduler` in DDPStrategy using `FlexibleRunner`.

SCZwangxiao opened this issue · 2 comments

Prerequisite

Environment

OrderedDict([('sys.platform', 'linux'), ('Python', '3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0]'), ('CUDA available', True), ('numpy_random_seed', 2147483648), ('GPU 0,1', 'NVIDIA A800-SXM4-80GB'), ('CUDA_HOME', '/usr/local/cuda'), ('NVCC', 'Cuda compilation tools, release 12.1, V12.1.105'), ('GCC', 'x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0'), ('PyTorch', '2.0.1+cu117'), ('PyTorch compiling details', 'PyTorch built with:\n - GCC 9.3\n - C++ Version: 201703\n - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.7\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.5\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n'), ('TorchVision', '0.15.2+cu117'), ('OpenCV', '4.8.1'), ('MMEngine', '0.9.1')])

Reproduces the problem - code sample

Modify the following settings in any config:

runner_type = 'FlexibleRunner'
strategy = dict(
    type='DDPStrategy',
    auto_scale_lr=dict(enable=True, base_batch_size=1024)
)

Reproduces the problem - command or script

bash tools/dist_train.sh <path to above config> 2

Reproduces the problem - error message

11/18 10:55:51 - mmengine - INFO - paramwise_options -- ln_vision.bias:weight_decay=0.0
11/18 10:55:52 - mmengine - INFO - paramwise_options -- llama_proj.bias:weight_decay=0.0
11/18 10:55:52 - mmengine - INFO - LR is set based on batch size of 1024 and the current batch size is 64. Scaling the original LR by 0.0625.
Traceback (most recent call last):
  File "tools/train.py", line 144, in <module>
    main()
  File "tools/train.py", line 140, in main
    runner.train()
  File "/xxxx/xxxx/xxxxxx/xxxx/engine/runner/xxxx_runner.py", line 161, in train
    self.strategy.prepare(
  File "/usr/local/lib/python3.8/dist-packages/mmengine/_strategy/single_device.py", line 72, in prepare
    self._scale_lr()
  File "/usr/local/lib/python3.8/dist-packages/mmengine/_strategy/base.py", line 711, in _scale_lr
    raise RuntimeError('`scale_lr` should be called before building '
RuntimeError: `scale_lr` should be called before building ParamScheduler because ParamScheduler will store initial lr from optimizer wrappers
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 125255) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
tools/train.py FAILED
------------------------------------------------------------

Additional information

In the `prepare` function of `SingleDeviceStrategy`, `_scale_lr` is called after building `param_schedulers`:

if optim_wrapper is not None:
    self.optim_wrapper = self.build_optim_wrapper(optim_wrapper, model)

if param_scheduler is not None:
    self.param_schedulers = self.build_param_scheduler(
        param_scheduler, self.optim_wrapper)

if optim_wrapper is not None:
    self._scale_lr()
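
A minimal sketch of one possible reordering (not an actual patch): call `_scale_lr()` right after the optimizer wrapper is built and before the parameter schedulers are constructed, so the schedulers record the already-scaled initial LR:

if optim_wrapper is not None:
    self.optim_wrapper = self.build_optim_wrapper(optim_wrapper, model)
    # Scale the LR here, before any ParamScheduler caches the initial lr
    # from the optimizer wrapper.
    self._scale_lr()

if param_scheduler is not None:
    self.param_schedulers = self.build_param_scheduler(
        param_scheduler, self.optim_wrapper)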

Also, it seems we are unable to pass `train_micro_batch_size_per_gpu` to `dispatch_kwargs` through the strategy config, so a `KeyError` occurs here:

real_bs = self.world_size * self.dispatch_kwargs[
    'train_micro_batch_size_per_gpu']
base_bs = self._auto_scale_lr['base_batch_size']
ratio = float(real_bs) / float(base_bs)
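
Until that key is forwarded, a defensive lookup could at least fail with a clearer message than the bare `KeyError` (a hypothetical sketch only; the error text is made up, not mmengine's actual code):

# Hypothetical defensive variant of the batch-size lookup in `_scale_lr`.
micro_bs = self.dispatch_kwargs.get('train_micro_batch_size_per_gpu')
if micro_bs is None:
    raise KeyError(
        '`auto_scale_lr` requires `train_micro_batch_size_per_gpu` in '
        '`dispatch_kwargs`, but it is not forwarded from the strategy '
        'config.')
real_bs = self.world_size * micro_bs
base_bs = self._auto_scale_lr['base_batch_size']
ratio = float(real_bs) / float(base_bs)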

Hi @SCZwangxiao, thanks for your feedback. We will fix it ASAP.