fs_vid2vid training: input to forward() is NoneType instead of a Tensor
When I try to train fs_vid2vid on my own data, after running the images through OpenPose etc. and creating the LMDB files, I get the error below. I don't understand why reading in the data and preparing it for the forward pass leaves the imaginaire code with a NoneType input instead of a Tensor. See my config file, an example of the LMDB files, and the directories containing the processed data used to create them (one for train and one for validation).
```
/content/drive/My Drive/imaginaire
Using random seed 2
Training with 1 GPUs.
Make folder logs/2021_1111_0029_09_ampO1
cudnn benchmark: True
cudnn deterministic: False
LMDB ROOT ['dataset/train']
Creating metadata
['human_instance_maps', 'images', 'poses-openpose']
Data file extensions: {'images': 'jpg', 'poses-openpose': 'json', 'human_instance_maps': 'png'}
Searching in dir: images
Found 336 sequences
Found 12934 files
Folder at dataset/train/images opened.
Folder at dataset/train/poses-openpose opened.
Folder at dataset/train/human_instance_maps opened.
Num datasets: 1
Num sequences: 336
Max sequence length: 40
Epoch length: 336
LMDB ROOT ['dataset/val']
Creating metadata
['human_instance_maps', 'images', 'poses-openpose']
Data file extensions: {'images': 'jpg', 'poses-openpose': 'json', 'human_instance_maps': 'png'}
Searching in dir: images
Found 40 sequences
Found 1524 files
Folder at dataset/val/images opened.
Folder at dataset/val/poses-openpose opened.
Folder at dataset/val/human_instance_maps opened.
Num datasets: 1
Num sequences: 40
Max sequence length: 40
Epoch length: 40
Train dataset length: 336
Val dataset length: 40
Using random seed 2
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input image: 3
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Concatenate poses-openpose:
ext: json
num_channels: 3
interpolator: None
normalize: False
pre_aug_ops: decode_json, convert::imaginaire.utils.visualization.pose::openpose_to_npy
post_aug_ops: vis::imaginaire.utils.visualization.pose::draw_openpose_npy
computed_on_the_fly: False
is_mask: False for input.
Concatenate human_instance_maps:
ext: png
num_channels: 3
is_mask: True
normalize: False
computed_on_the_fly: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input label: 3
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input image: 3
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input image: 3
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input image: 3
Initialized temporal embedding network with the reference one.
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Concatenate poses-openpose:
ext: json
num_channels: 3
interpolator: None
normalize: False
pre_aug_ops: decode_json, convert::imaginaire.utils.visualization.pose::openpose_to_npy
post_aug_ops: vis::imaginaire.utils.visualization.pose::draw_openpose_npy
computed_on_the_fly: False
is_mask: False for input.
Concatenate human_instance_maps:
ext: png
num_channels: 3
is_mask: True
normalize: False
computed_on_the_fly: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input label: 3
Concatenate images:
ext: jpg
num_channels: 3
normalize: True
computed_on_the_fly: False
is_mask: False
pre_aug_ops: None
post_aug_ops: None for input.
Num. of channels in the input image: 3
Initialize net_G and net_D weights using type: xavier gain: 0.02
Using random seed 2
net_G parameter count: 91,147,294
net_D parameter count: 6,292,963
Use custom initialization for the generator.
Setup trainer.
Using automatic mixed precision training.
Augmentation policy:
GAN mode: hinge
Perceptual loss:
Mode: vgg19
Loss GAN Weight 1.0
Loss FeatureMatching Weight 10.0
Loss Perceptual Weight 10.0
Loss GAN_face Weight 10.0
Loss FeatureMatching_face Weight 10.0
Loss Flow Weight 10.0
Loss Flow_L1 Weight 10.0
Loss Flow_Warp Weight 10.0
Loss Flow_Mask Weight 10.0
TRAIN DATASET := <imaginaire.trainers.fs_vid2vid.Trainer object at 0x7f13b219cfa0>
No checkpoint found.
Epoch 0 ...
Epoch length: 336
------ Now start training 4 frames -------
Traceback (most recent call last):
  File "/content/drive/My Drive/imaginaire/train.py", line 169, in <module>
    main()
  File "/content/drive/My Drive/imaginaire/train.py", line 141, in main
    trainer.gen_update(
  File "/content/drive/My Drive/imaginaire/imaginaire/trainers/vid2vid.py", line 254, in gen_update
    net_G_output = self.net_G(data_t)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/My Drive/imaginaire/imaginaire/utils/trainer.py", line 195, in forward
    return self.module(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/My Drive/imaginaire/imaginaire/generators/fs_vid2vid.py", line 151, in forward
    self.weight_generator(ref_images, ref_labels, label, is_first_frame)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/My Drive/imaginaire/imaginaire/generators/fs_vid2vid.py", line 586, in forward
    self.encode_reference(ref_image, ref_label, label, k)
  File "/content/drive/My Drive/imaginaire/imaginaire/generators/fs_vid2vid.py", line 644, in encode_reference
    x_label = self.ref_label_first(ref_label)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/My Drive/imaginaire/imaginaire/layers/conv.py", line 142, in forward
    x = layer(x)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 446, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (NoneType, Tensor, Parameter, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (NoneType, Tensor, Parameter, tuple, tuple, tuple, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (NoneType, Tensor, Parameter, tuple, tuple, tuple, int)
```
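From the traceback, the None reaches conv2d through `self.ref_label_first(ref_label)` in `encode_reference`, so it looks like the reference label (the rendered OpenPose map) in the batch is None rather than a Tensor. To narrow this down, here is a minimal debugging sketch that walks the batch passed to `net_G` and reports which entries are None. The hook point (`gen_update` in `imaginaire/trainers/vid2vid.py`) comes from the traceback above, but the helper name and the exact keys inside `data_t` are my assumptions, not confirmed imaginaire internals:

```python
# Minimal debugging sketch (hypothetical helper, not part of imaginaire).
# Assumes data_t is the (possibly nested) dict of tensors that gen_update
# passes to self.net_G (imaginaire/trainers/vid2vid.py, line 254 in the
# traceback); key names depend on your config.
import torch

def report_none_entries(batch, prefix=''):
    """Print the dotted path of every None value inside a nested batch."""
    if isinstance(batch, dict):
        for key, value in batch.items():
            report_none_entries(value, prefix + str(key) + '.')
    elif isinstance(batch, (list, tuple)):
        for idx, value in enumerate(batch):
            report_none_entries(value, prefix + str(idx) + '.')
    elif batch is None:
        print(prefix.rstrip('.'), 'is None')
    elif isinstance(batch, torch.Tensor):
        pass  # a real tensor; nothing to report

# Usage: call report_none_entries(data_t) just before
# `net_G_output = self.net_G(data_t)` in gen_update.
```

If the entry that prints as None is the pose/label path, that would suggest the `poses-openpose` JSONs are not surviving the `decode_json` / `openpose_to_npy` / `draw_openpose_npy` pipeline shown in the log above, which is where I would look first.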