Zj-BinXia/DiffIR

ValueError: `Dataloader` returned 0 length. Please make sure that it returns at least 1 batch

Closed this issue · 1 comment

Hello, sorry to bother you. When training DiffIR-inpainting on my own dataset, this error occurs. I have tried to investigate but could not find the cause; the following have already been ruled out:

  1. There is no problem with the file path or format;
  2. There is no problem with the content of the dataset;
  3. I am not sure whether the size of the dataset is the problem.

Please do not hesitate to advise! Thanks.
```
[saicinpainting.training.trainers.baseS1][INFO] - BaseInpaintingTrainingModule init done
[root][INFO] - Added key: store_based_barrier_key:1 to store for rank: 0
[INFO] - Make val dataloader default from /home/liu/ZZB/DH_2700//Val/Val_GT
[main][CRITICAL] - Training failed due to Dataloader returned 0 length. Please make sure that it returns at least 1 batch:
Traceback (most recent call last):
  File "bin/train.py", line 63, in main
    trainer.fit(training_model)
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
    self.dispatch()
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
    self.accelerator.start_training(self)
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
    self._results = trainer.run_train()
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 607, in run_train
    self.run_sanity_check(self.lightning_module)
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 854, in run_sanity_check
    self.reset_val_dataloader(ref_model)
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py", line 364, in reset_val_dataloader
    self.num_val_batches, self.val_dataloaders = self._reset_eval_dataloader(model, 'val')
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py", line 325, in _reset_eval_dataloader
    num_batches = len(dataloader) if has_len(dataloader) else float('inf')
  File "/home/liu/anaconda3/envs/DIFFIR/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py", line 33, in has_len
    raise ValueError('Dataloader returned 0 length. Please make sure that it returns at least 1 batch')
ValueError: Dataloader returned 0 length. Please make sure that it returns at least 1 batch
```
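One common cause of this error, related to point 3 above, is a validation set smaller than the batch size combined with `drop_last=True`: the last (and only) incomplete batch is discarded, so the DataLoader has length 0. A minimal sketch of that length calculation in pure Python (`dataloader_len` is a hypothetical helper mirroring PyTorch's formula for map-style datasets, not part of DiffIR):

```python
import math

def dataloader_len(num_samples: int, batch_size: int, drop_last: bool) -> int:
    # Mirrors how PyTorch computes len() for a map-style DataLoader:
    # with drop_last=True the final incomplete batch is discarded.
    if drop_last:
        return num_samples // batch_size
    return math.ceil(num_samples / batch_size)

# e.g. 5 validation images with batch_size=8 and drop_last=True
# gives a DataLoader of length 0, triggering this exact ValueError.
```

If that is the case, either add more validation images, lower the validation batch size, or disable `drop_last` for the val loader.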

It seems the data was not read successfully. For more details on the expected data format, you can refer to lama: https://github.com/advimman/lama.