FID on CIFAR-10
RachelTeamo opened this issue · 2 comments
Thank you for your work. Compared with OpenAI's code, yours adds more detail and resolved many of the problems I had before. But when I sampled with your open-source ADM_cifar10_baseline.pt model, the final FID came out to 3.38, which is still far from the 2.99 in your paper. Below is the script I used for generation. Is there anything wrong with it?
python3 scripts/image_sample.py \
--image_size 32 --timestep_respacing 100 \
--model_path ./ckpt/ADM_cifar10_baseline.pt \
--num_channels 128 --num_head_channels 32 --num_res_blocks 3 --attention_resolutions 16,8 \
--resblock_updown True --use_new_attention_order True --learn_sigma True --dropout 0.3 \
--diffusion_steps 1000 --noise_schedule cosine --use_scale_shift_norm True --batch_size 256 --num_samples 50000
My final results are:
Inception Score: 9.63854694366455
FID: 3.386766967519236
Thanks again.
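For anyone reproducing these numbers: one way to compute FID over a folder of generated images is the third-party pytorch-fid package. A minimal sketch, assuming the samples and a CIFAR-10 reference set have been exported as PNG directories (the paths are placeholders, and this is not necessarily the evaluator used to produce the numbers above):

from pytorch_fid.fid_score import calculate_fid_given_paths

# Placeholder directories of PNG images: a CIFAR-10 reference set and
# the 50k generated samples.
fid = calculate_fid_given_paths(
    ["./cifar10_ref_pngs", "./cifar_sample_pngs"],
    batch_size=50,
    device="cuda",
    dims=2048,  # standard InceptionV3 pool3 features
)
print(f"FID: {fid}")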
Hi @RachelTeamo, the 3.39 FID is expected because you sampled with only 100 steps (--timestep_respacing 100).
The 2.99 FID was obtained with 1000 sampling steps (see Table 3 in our paper).
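For context, here is a minimal sketch of what the flag does, assuming the guided_diffusion/respace.py module from openai/guided-diffusion that this code builds on: --timestep_respacing selects a subset of the training timesteps to use at sampling time.

# Minimal sketch, assuming the openai/guided-diffusion layout.
from guided_diffusion.respace import space_timesteps

# --diffusion_steps 1000 with --timestep_respacing 100 keeps only
# 100 of the 1000 training timesteps for sampling, trading FID for
# a roughly 10x sampling speedup.
used_steps = space_timesteps(num_timesteps=1000, section_counts="100")
print(len(used_steps))  # 100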
Thank you for the reply. Following your suggestion, I reran sampling with the command below and got FID = 3.011, which is quite close to the 2.99 in Table 3. Thanks again~
python3 scripts/image_sample.py \
--image_size 32 --timestep_respacing 1000 \
--model_path ./ckpt/ADM_cifar10_baseline.pt \
--num_channels 128 --num_head_channels 32 --num_res_blocks 3 --attention_resolutions 16,8 \
--resblock_updown True --use_new_attention_order True --learn_sigma True --dropout 0.3 \
--diffusion_steps 1000 --noise_schedule cosine --use_scale_shift_norm True --batch_size 512 --num_samples 50000 \
--sample_dir ./cifar_sample