uzh-rpg/RVT

AssertionError: assert time.size == 1 during training.


Thanks for the wonderful work! I'm having some trouble training on my own dataset with RVT; any help would be greatly appreciated!
I configured my dataset according to the data format required by RVT, and the file structure under each sequence looks like this:
[screenshot: file structure of one sequence]
My training command is:
python3 train.py model=rnndet dataset=gen4 dataset.path='/root/data1/dataset/RVT' wandb.project_name=RVT wandb.group_name=1mpx +experiment/gen4="default.yaml" hardware.gpus=[0,1] batch_size.train=2 batch_size.eval=2 hardware.num_workers.train=2 hardware.num_workers.eval=2
But this error occurs during training; it looks like loaded_labels contains all the labels of a sequence:

 Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 645, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1098, in _run
    results = self._run_stage()
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1177, in _run_stage
    self._run_train()
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1200, in _run_train
    self.fit_loop.run()
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 214, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 200, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 247, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 357, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1342, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/core/module.py", line 1661, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 169, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 281, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 234, in optimizer_step
    return self.precision_plugin.optimizer_step(
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 85, in optimizer_step
    closure_result = closure()
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 147, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 133, in closure
    step_output = self._step_fn()
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 406, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1480, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 352, in training_step
    return self.model(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore[index]
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch171/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 98, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "/root/data1/code/RVT-master/modules/detection.py", line 180, in training_step
    loaded_labels_proph, yolox_preds_proph = to_prophesee(obj_labels, pred_processed)
  File "/root/data1/code/RVT-master/utils/evaluation/prophesee/io/box_loading.py", line 79, in to_prophesee
    assert time.size == 1
AssertionError
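
From reading box_loading.py, my understanding is that to_prophesee expects the labels passed in for each frame to share exactly one unique timestamp, which is what the assert checks. A minimal sketch of that invariant on a structured label array (the field name 't' is my assumption, based on the gen4 label dtype):

import numpy as np

def check_single_timestamp(frame_labels: np.ndarray) -> None:
    # All labels attached to one frame should carry the same timestamp.
    time = np.unique(frame_labels['t'])
    assert time.size == 1, f"labels of one frame span {time.size} timestamps: {time}"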

I printed time.size; the sequence and time step at which the error occurs differ between training runs:
[screenshots: printed time.size values from three different training runs]

Below is the data format of the sequence in which the error occurred. Could you please help me check whether there is a problem with it?
[screenshot: files of the failing sequence]
For labels.npz and timestamps_us.npy, the shapes of labels, objframe_idx_2_label_idx, and timestamps_us are (8247,), (1401,), and (1401,), respectively.
labels:

[(  100000, 411. , 49.875, 85.5, 188.625, 0, 1., 0)
 (  150000, 413.5, 50.25 , 82. , 188.625, 0, 1., 0)
 (  200000, 413.5, 50.25 , 81.5, 188.25 , 0, 1., 0) ...
 (70000000, 192. , 94.125, 80. , 117.   , 0, 1., 5)
 (70050000, 192. , 93.75 , 76.5, 116.625, 0, 1., 5)
 (70100000, 194. , 93.   , 70. , 116.25 , 0, 1., 5)]

objframe_idx_2_label_idx:
[ 0 1 2 ... 1398 1399 1400]
timestamps_us:
[ 100000 150000 200000 ... 70000000 70050000 70100000]
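
To double-check labels.npz against that invariant, I can verify that objframe_idx_2_label_idx points at the first label of each frame, that the labels between two consecutive entries share a single timestamp, and that this timestamp matches timestamps_us. This is only a sketch based on my file layout shown above; the field name 't' is again an assumption:

import numpy as np

data = np.load('labels.npz')
labels = data['labels']                          # structured array, shape (8247,)
frame2label = data['objframe_idx_2_label_idx']   # shape (1401,)
ts = np.load('timestamps_us.npy')                # shape (1401,)

starts = frame2label
ends = np.append(frame2label[1:], labels.shape[0])
for frame_idx, (lo, hi) in enumerate(zip(starts, ends)):
    frame_t = np.unique(labels['t'][lo:hi])
    # Each frame should own at least one label, exactly one timestamp,
    # and that timestamp should equal timestamps_us at the same frame index.
    assert hi > lo, frame_idx
    assert frame_t.size == 1, (frame_idx, frame_t)
    assert frame_t[0] == ts[frame_idx], (frame_idx, frame_t[0], ts[frame_idx])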

For the event representations, the shapes of event_representations_ds2_nearest.h5, objframe_idx_2_repr_idx.npy, and timestamps_us are (1401, 5, 480, 640), (1401,), and (1401,), respectively.
objframe_idx_2_repr_idx.npy:
[ 0 1 2 ... 1398 1399 1400]
timestamps_us:
[ 100000 150000 200000 ... 70000000 70050000 70100000]

The frame rate of the sequence is 20 Hz, and I removed the first frame.
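
Finally, a similar cross-check between the label timestamps and the event representations (again only a sketch; the relative paths below are placeholders for my own directory layout, and I print the HDF5 keys rather than assume the dataset name):

import h5py
import numpy as np

label_ts = np.load('timestamps_us.npy')                                     # label timestamps, (1401,)
repr_ts = np.load('event_representations/timestamps_us.npy')                # representation timestamps, (1401,)
frame2repr = np.load('event_representations/objframe_idx_2_repr_idx.npy')   # (1401,)

# Every labeled frame should map to a representation whose timestamp is not
# earlier than the label timestamp; since both arrays have length 1401 here,
# I would expect them to match exactly.
assert frame2repr.shape == label_ts.shape
assert np.all(repr_ts[frame2repr] >= label_ts)

with h5py.File('event_representations/event_representations_ds2_nearest.h5', 'r') as f:
    for name, obj in f.items():
        print(name, getattr(obj, 'shape', obj))   # expect something like (1401, 5, 480, 640)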