jtchen0528/PCL-I2G

Mask size error

Opened this issue · 1 comment

Thanks for your work! I'm trying to run experiments with the following settings:
----------------- Options ---------------
batch_size: 32
beta1: 0.9
checkpoints_dir: ./checkpoints
display_freq: 1000
fake_class_id: 0
fake_im_path: ./dataset\DF
fineSize: 224
gpu_ids: [0] [default: 0]
help: None [default: ==SUPPRESS==]
init_type: xavier
isTrain: True [default: None]
lbda: 10
loadSize: 256
load_model: False
lr: 0.001
lr_policy: constant
max_dataset_size: inf
max_epochs: None
model: patch_inconsistency_discriminator
nThreads: 0
name: patch_inconsistency_discriminator_resnet34_layer4_extra3_size224 [default: ]
overwrite_config: True [default: False]
patience: 10
prefix:
print_freq: 100
real_im_path: ./dataset\original
results_dir: ./results/
save_epoch_freq: 100
save_latest_freq: 1000
seed: 0
suffix:
which_epoch: latest
which_model_netD: resnet34_layer4_extra3
----------------- End -------------------
However, during training I run into this issue:
File "D:\pythonProject\PCL-I2G\models\patch_inconsistency_discriminator_model.py", line 85, in compute_losses_D
masks = self.mask_down_sampling(masks).reshape(n, h, w)
RuntimeError: shape '[32, 14, 14]' is invalid for input of size 8192

Could you please give me some advice on how to solve this? Thanks!

Sorry, I did not make this code/model runnable for all image sizes. The PCL model only accepts the input size suggested by the original paper, which is 256x256. Your fineSize is 224, so images are cropped to 224x224 instead of 256x256. The backbone's patch grid then becomes 14x14, while the downsampled masks still carry 16x16 = 256 values per sample (32 × 256 = 8192), which cannot be reshaped to [32, 14, 14]. Change your input size to 256x256 so the images pass through the PCL model smoothly.
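
For reference, here is a minimal sketch of the mismatch. It assumes mask_down_sampling pools the masks onto the fixed 16x16 patch grid that a 256x256 input implies; nn.AdaptiveAvgPool2d is a stand-in for whatever the repo actually uses, chosen because it reproduces the numbers in the traceback:

```python
import torch
import torch.nn as nn

# Stand-in for the repo's mask_down_sampling (an assumption): pool masks
# onto the fixed 16x16 patch grid that a 256x256 input implies (256 / 16).
mask_down_sampling = nn.AdaptiveAvgPool2d((16, 16))

n = 32                                     # batch_size from the options above

# fineSize 224: the backbone's patch grid is 14x14, but the pooled masks
# still carry 16x16 = 256 values each, so 32 * 256 = 8192 values in total.
masks = mask_down_sampling(torch.rand(n, 1, 224, 224))
h = w = 224 // 16                          # 14
try:
    masks.reshape(n, h, w)
except RuntimeError as e:
    print(e)  # shape '[32, 14, 14]' is invalid for input of size 8192

# fineSize 256: both grids are 16x16 and the reshape succeeds.
masks = mask_down_sampling(torch.rand(n, 1, 256, 256))
print(masks.reshape(n, 16, 16).shape)      # torch.Size([32, 16, 16])
```

In practice the fix is what the settings above already show for loadSize: launch training with fineSize set to 256 as well, so both the images and the masks stay at 256x256.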