BlinkDL/RWKV-LM

v5 train error

HaloKim opened this issue · 1 comment

Hi,
I tried to train on my custom dataset and hit the error below.

RWKV_MY_TESTING 
Using /root/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py38_cu117/wkv5/build.ninja...
Building extension module wkv5...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module wkv5...
INFO:pytorch_lightning.strategies.deepspeed:initializing deepspeed distributed: GLOBAL_RANK: 1, MEMBER: 2/2
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 1
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
INFO:pytorch_lightning.utilities.rank_zero:Enabling DeepSpeed BF16.
INFO:torch.distributed.distributed_c10d:Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
Using /root/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Using /root/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py38_cu117/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_adam...
Time to load fused_adam op: 0.2275218963623047 seconds
Traceback (most recent call last):
  File "/workspace/RWKV-LM/RWKV-v5/train.py", line 309, in <module>
    trainer.fit(model, data_loader)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py", line 1093, in _run
    self.strategy.setup(self)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 345, in setup
    self.init_deepspeed()
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 456, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 493, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 414, in _setup_model_and_optimizer
    deepspeed_engine, deepspeed_optimizer, _, _ = deepspeed.initialize(
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 124, in initialize
    engine = DeepSpeedEngine(args=args,
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 262, in __init__
    self._configure_with_arguments(args, mpu)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 883, in _configure_with_arguments
    self._config = DeepSpeedConfig(self.config, mpu)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config.py", line 797, in __init__
    self._initialize_params(self._param_dict)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config.py", line 816, in _initialize_params
    self.zero_config = get_zero_config(param_dict)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py", line 66, in get_zero_config
    return DeepSpeedZeroConfig(**zero_config_dict)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config_utils.py", line 55, in __init__
    self._deprecated_fields_check(self)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config_utils.py", line 82, in _deprecated_fields_check
    if field.field_info.extra.get("deprecated", False):
AttributeError: 'FieldInfo' object has no attribute 'field_info'
Loading extension module fused_adam...
Time to load fused_adam op: 0.30267786979675293 seconds
/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/ddp.py:437: UserWarning: Error handling mechanism for deadlock detection is uninitialized. Skipping check.
  rank_zero_warn("Error handling mechanism for deadlock detection is uninitialized. Skipping check.")
Traceback (most recent call last):
  File "train.py", line 309, in <module>
    trainer.fit(model, data_loader)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 88, in launch
    return function(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/trainer/trainer.py", line 1093, in _run
    self.strategy.setup(self)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 345, in setup
    self.init_deepspeed()
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 456, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 493, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
  File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/strategies/deepspeed.py", line 414, in _setup_model_and_optimizer
    deepspeed_engine, deepspeed_optimizer, _, _ = deepspeed.initialize(
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 124, in initialize
    engine = DeepSpeedEngine(args=args,
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 262, in __init__
    self._configure_with_arguments(args, mpu)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 883, in _configure_with_arguments
    self._config = DeepSpeedConfig(self.config, mpu)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config.py", line 797, in __init__
    self._initialize_params(self._param_dict)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config.py", line 816, in _initialize_params
    self.zero_config = get_zero_config(param_dict)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py", line 66, in get_zero_config
    return DeepSpeedZeroConfig(**zero_config_dict)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config_utils.py", line 55, in __init__
    self._deprecated_fields_check(self)
  File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/config_utils.py", line 82, in _deprecated_fields_check
    if field.field_info.extra.get("deprecated", False):
AttributeError: 'FieldInfo' object has no attribute 'field_info'

What can I do?

Solved it: downgrade pydantic from the latest release to pydantic==1.10.13.
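
For anyone hitting the same AttributeError: this DeepSpeed release's deprecated-field check still uses the pydantic v1 API (field.field_info.extra), which no longer exists in pydantic v2, hence 'FieldInfo' object has no attribute 'field_info'. Pinning pydantic to a 1.x release avoids the crash. Below is a minimal sketch (not part of the repo) of a version guard you could drop near the top of train.py so the run fails fast with a clear message instead of deep inside DeepSpeed:

```python
# Guard against pydantic v2, which breaks this DeepSpeed version's
# _deprecated_fields_check (it expects pydantic v1's field.field_info API).
# Known-good fix from this thread: pip install pydantic==1.10.13
import pydantic

major = int(pydantic.VERSION.split(".")[0])
if major >= 2:
    raise RuntimeError(
        f"pydantic {pydantic.VERSION} detected; this DeepSpeed version needs pydantic<2. "
        "Downgrade with: pip install pydantic==1.10.13"
    )
```

Alternatively, newer DeepSpeed releases add pydantic v2 support, so upgrading DeepSpeed instead of downgrading pydantic may also work.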