XingangPan/deep-generative-prior

Cannot reproduce the image reconstruction results (Table 1 in the paper).

PeterouZh opened this issue · 4 comments

Hi,
Thank you for your great work.
Could you share the training script for the image reconstruction experiment? I used the colorization hyper-parameters for the reconstruction experiment, but the results are poor: on the ImageNet 1k val set I only achieve a PSNR of 25.69 and an SSIM of 85.12, which is well below the results reported in the paper (PSNR of 32.89, SSIM of 95.95). Could you help me?
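For reference, this is roughly how I compute PSNR (the exact evaluation protocol from the paper, e.g. color space and crop, is an assumption here; for SSIM I use `skimage.metrics.structural_similarity`):

```python
# Minimal PSNR sketch for comparing a reconstruction against its
# ground-truth image. The 8-bit value range (max_val=255) is an
# assumption; the paper's evaluation protocol may differ.
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two equally shaped images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)
```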

@PeterouZh Hi, thanks for your interest. For the reconstruction experiment, we train for more iterations and use a higher weight for the MSE loss. The training script is as follows:

```shell
python -u -W ignore main.py \
  --dist \
  --port 20041 \
  --exp_path $work_path \
  --root_dir /mnt/lustre/share/images/val \
  --list_file scripts/list/val_1000.txt \
  --dgp_mode reconstruct \
  --update_G \
  --ftr_num 8 8 8 8 8 \
  --ft_num 2 3 4 5 6 \
  --lr_ratio 1 1 1 1 1 \
  --w_D_loss 1 1 1 0.1 0.1 \
  --w_nll 0.02 \
  --w_mse 10 10 10 100 100 \
  --select_num 500 \
  --sample_std 0.3 \
  --iterations 200 200 200 200 200 \
  --G_lrs 2e-4 2e-4 1e-4 1e-4 1e-5 \
  --z_lrs 1e-3 1e-3 1e-4 1e-4 1e-5 \
  --use_in True True True True True \
  --dataset I128 \
  --weights_root pretrained \
  --load_weights 128 \
  --cbn \
  --G_B2 0.999 \
  --G_attn 64 --D_attn 64 \
  --G_ch 96 --D_ch 96 \
  --G_nl inplace_relu --D_nl inplace_relu \
  --SN_eps 1e-6 --BN_eps 1e-5 --adam_eps 1e-6 \
  --G_shared \
  --G_init ortho --D_init ortho --skip_init \
  --hier --dim_z 120 --shared_dim 128 \
  --seed 0 \
  --use_ema
```
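The list-valued arguments above (`--iterations`, `--G_lrs`, `--z_lrs`, `--w_mse`, ...) are consumed one entry per progressive fine-tuning stage. A rough sketch of how they pair up (hypothetical names; the real loop in the DGP code base differs in detail):

```python
# Hypothetical sketch of the per-stage schedule implied by the
# command line; not the repo's actual training loop.
iterations = [200, 200, 200, 200, 200]
G_lrs = [2e-4, 2e-4, 1e-4, 1e-4, 1e-5]
z_lrs = [1e-3, 1e-3, 1e-4, 1e-4, 1e-5]
w_mse = [10, 10, 10, 100, 100]

schedule = list(zip(iterations, G_lrs, z_lrs, w_mse))
for n_iter, g_lr, z_lr, mse_weight in schedule:
    # Each stage runs n_iter steps with its own generator and latent
    # learning rates; the MSE weight rises in the last two stages so the
    # output is pulled tightly toward the target image.
    pass  # train_stage(n_iter, g_lr, z_lr, mse_weight)  # hypothetical
```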

Hope this helps.

Thanks very much!

Hi, there,

There seems to be a problem with the script above: the `resolution=128` argument is missing, so a 256x256 generator is created by default.

@PeterouZh Thanks for pointing that out. This script corresponds to an older version of the code, so a few arguments differ. You may revise it accordingly.
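For the current code, passing the flag explicitly should avoid the 256x256 default; a minimal sketch (the exact flag spelling in the repo's `main.py` is worth double-checking):

```shell
# Append the resolution flag to the reconstruction command above;
# all other arguments stay unchanged.
python -u -W ignore main.py \
  --dgp_mode reconstruct \
  --dataset I128 \
  --resolution 128 \
  # ...remaining arguments as in the script above
```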