joannahong/Lip2Wav-pytorch

Training Dimension Mismatch


torch.Size([18, 3, 20, 128, 128])
torch.Size([18, 20, 512, 2, 2])
Traceback (most recent call last):
File "train_multi.py", line 327, in
train(args)
File "train_multi.py", line 207, in train
encoder_outputs = encoder(embedded_inputs.cuda(), vid_lengths.cuda()) # [bs x 25 x encoder_embedding_dim]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/data_parallel.py", line 165, in forward
return self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/drive/MyDrive/Design_Project/Lip2Wav-pytorch/model/model.py", line 307, in forward
outputs, _ = self.lstm(x)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py", line 659, in forward
self.check_forward_args(input, hx, batch_sizes)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py", line 605, in check_forward_args
self.check_input(input, batch_sizes)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py", line 200, in check_input
expected_input_dim, input.dim()))
RuntimeError: input must have 3 dimensions, got 5

We have a problem while trying to train: we hit the error above. Can you help with this? We don't understand what you intended here. vid_padded is created with 5 dimensions, but the Encoder's forward pass (the LSTM) expects a 3-dimensional input:
vid_padded = torch.Tensor(bsz, 3, max_input_len, self._hparams.img_size, self._hparams.img_size) # todo; shape: [bsz, 3, max_input_len, img_size, img_size]
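To illustrate the mismatch: nn.LSTM only accepts 3-D input ([batch, seq_len, features] with batch_first=True), so the 5-D conv features normally need their spatial dimensions flattened into the feature axis before the call at model.py line 307 (outputs, _ = self.lstm(x)). Below is a minimal sketch using the shapes from the printout above and illustrative layer sizes; it is not the repo's actual encoder code.

```python
import torch
import torch.nn as nn

# Sketch only: the 3D-conv stack reportedly produces [batch, frames, 512, 2, 2].
# An LSTM with batch_first=True requires [batch, seq_len, features], so the
# spatial dims (2 x 2) are folded into the feature axis before the LSTM.
conv_out = torch.randn(18, 20, 512, 2, 2)        # shape taken from the printout above
bsz, seq_len = conv_out.shape[:2]
lstm_in = conv_out.reshape(bsz, seq_len, -1)     # -> [18, 20, 512*2*2] = [18, 20, 2048]

lstm = nn.LSTM(input_size=2048, hidden_size=256, # hidden_size chosen for illustration
               batch_first=True, bidirectional=True)
outputs, _ = lstm(lstm_in)                       # [18, 20, 512], a valid 3-D LSTM input/output
print(outputs.shape)
```

If the encoder's forward pass is missing a reshape like this (or the reshape expects a different layout than the [bsz, 3, T, H, W] tensor built in the data loader), that would explain the "input must have 3 dimensions, got 5" error.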