AIRI-Institute/HairFastGAN

RuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 3

RuntimeError Traceback (most recent call last)
in <cell line: 8>()
6 shape_path = '/content/HairFastGAN/womancropped512x512.png'
7 color_path = '/content/HairFastGAN/mr.png'
----> 8 final_image = hair_fast.swap(face_path, shape_path, color_path)
9 # T.functional.to_pil_image(final_image).resize((512, 512)) # 1024 -> 512
10 import torchvision.transforms as transforms

11 frames
/content/HairFastGAN/models/FeatureStyleEncoder/pixel2style2pixel/models/stylegan2/model.py in insert_feature(x, layer_idx)
527 def insert_feature(x, layer_idx):
528 if features_in is not None and features_in[layer_idx] is not None:
--> 529 x = (1 - feature_scale) * x + feature_scale * features_in[layer_idx].type_as(x)
530 return x
531

RuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 3

Input images need to be 1024x1024
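
If you prefer to resize the images yourself, here is a minimal sketch (assuming face_path, shape_path and color_path are defined as in the cell above, that overwriting the files in place is acceptable, and that Pillow's LANCZOS filter is just one reasonable resampling choice):

from PIL import Image

# Resize every input to the 1024x1024 resolution the encoder expects
for path in [face_path, shape_path, color_path]:
    img = Image.open(path).convert('RGB')
    if img.size != (1024, 1024):
        img.resize((1024, 1024), Image.LANCZOS).save(path)  # overwrite in place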

Alternatively, you can simply use the align flag. In addition to cropping the images around the face, it also resizes them to a resolution of 1024. Here's how to use it:

final_image = hair_fast.swap(face_path, shape_path, color_path, align=True)
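
If you then want the result as a PIL image, the commented line in the cell above points the way; a short sketch, assuming final_image is the image tensor returned by swap and that /content/result.png is just a placeholder output path:

import torchvision.transforms as T

# Convert the output tensor to a PIL image and optionally downscale 1024 -> 512
result = T.functional.to_pil_image(final_image.detach().cpu())
result.resize((512, 512)).save('/content/result.png')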