sanghyun-son/EDSR-PyTorch

I got this ERROR: RuntimeError: stack expects each tensor to be equal size

Leonacelli opened this issue · 5 comments

[Screenshot 2021-12-15 185420]
Anyone know what's wrong?

Making model...
Preparing loss function:
1.000 * L1
/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:417: UserWarning: To get the last learning rate computed by the scheduler, please use get_last_lr().
"please use get_last_lr().", UserWarning)
[Epoch 1] Learning rate: 1.00e-4
Traceback (most recent call last):
File "/content/EDSR-PyTorch/src/main.py", line 33, in
main()
File "/content/EDSR-PyTorch/src/main.py", line 27, in main
t.train()
File "/content/EDSR-PyTorch/src/trainer.py", line 42, in train
for batch, (lr, hr, _,) in enumerate(self.loader_train):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 84, in
return [default_collate(samples) for samples in transposed]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 0, 96] at entry 0 and [3, 96, 0] at entry 1
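The shapes in the message are the real clue: [3, 0, 96] and [3, 96, 0] are patches whose height or width has collapsed to zero, so default_collate cannot stack them into one batch tensor. A minimal sketch outside the repo that reproduces the same failure (the tensors here are just placeholders standing in for the bad patches):

```python
import torch

# Two "patches" shaped like the ones in the traceback: one has zero height,
# the other zero width, so they cannot be stacked into a single batch tensor.
a = torch.zeros(3, 0, 96)
b = torch.zeros(3, 96, 0)

# default_collate ends up calling torch.stack, which requires identical shapes:
torch.stack([a, b], 0)
# RuntimeError: stack expects each tensor to be equal size,
# but got [3, 0, 96] at entry 0 and [3, 96, 0] at entry 1
```

A zero-sized crop usually means the random patch fell outside the image, e.g. the LR training images are smaller than the LR patch size implied by --patch_size and --scale, or the HR/LR pairs don't actually match.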

Seems like a "batch-size" problem, but I don't know how to fix it.
I try to fix it by using "transforms.Resize() ", but failed.
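transforms.Resize() won't help here, because this repo crops patches inside its own dataset code rather than through torchvision transforms. A more direct check is to verify that every training image is large enough for the patch the loader tries to cut. A rough diagnostic sketch, assuming a DIV2K-style layout; the paths and the patch_size / scale values below are assumptions, so adjust them to your own --dir_data, --patch_size and --scale:

```python
import glob
import os
from PIL import Image

# Assumed settings; match these to your --patch_size and --scale arguments.
patch_size = 192              # HR patch size, e.g. --patch_size 192
scale = 2                     # e.g. --scale 2
lr_patch = patch_size // scale

# Assumed dataset layout; point these at your actual training folders.
lr_dir = f'dataset/DIV2K/DIV2K_train_LR_bicubic/X{scale}'

for path in sorted(glob.glob(os.path.join(lr_dir, '*.png'))):
    w, h = Image.open(path).size
    if w < lr_patch or h < lr_patch:
        # Any image listed here is too small to yield a full LR patch,
        # which can leave a zero-sized crop and break default_collate.
        print(f'{os.path.basename(path)}: {w}x{h} < {lr_patch}x{lr_patch}')
```

If every image passes this check, it may also be worth confirming that the HR and LR folders contain matching files for the chosen scale and, if a binary cache (--ext sep) is in use, deleting and regenerating it.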

Hi, I'm running into the same problem as you. Are you Chinese?

Seems like a "batch-size" problem, but I don't know how to fix it. I try to fix it by using "transforms.Resize() ", but failed.

Hi! Have you solved the problem?
I'm seeing the same one.

Hello, may I ask how you solved it? @Leonacelli