luping-liu/PNDM

Training time

zzw-zwzhang opened this issue · 6 comments

Hi, thanks for your interesting work.

What is your training time on each dataset?

In our paper, we use pre-trained models from other papers.
We have also trained new models ourselves and found that about 24 hours is enough for CIFAR10 using 2 RTX 3090s with batch_size set to 256.
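For reference, a rough sketch of that kind of setup (this is not the actual training script in this repo; build_unet and the hyperparameters are placeholders):

    # Hypothetical 2-GPU CIFAR10 training skeleton with batch_size 256;
    # an illustration of the scale described above, not the PNDM repo's launcher.
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # scale images to [-1, 1]
    ])
    train_set = datasets.CIFAR10("data", train=True, download=True, transform=transform)
    loader = DataLoader(train_set, batch_size=256, shuffle=True,
                        num_workers=8, pin_memory=True, drop_last=True)

    model = build_unet()  # placeholder model constructor
    model = torch.nn.DataParallel(model, device_ids=[0, 1]).cuda()  # split each batch over 2 GPUs
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)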

Thank you for your reply.

I used the default DDIM code to train on CIFAR10 for 800k steps, but the FID is 187. I will try your code again.

Hi, I retrained on CIFAR10 with your default parameters, but the reproduced result is worse than your provided model. For example,
DDIM 50 steps: 6.1640 vs. 14.3992

python main.py --runner train --device cuda --config ddim_cifar10.yml --method DDIM --train_path temp/train

Can you provide any suggestions?

All the models we provide are from other public repos.

  1. Maybe you can contact the author of DDIM for more information?
  2. Maybe you can change the loss type from linear to square, similar to DDIM, and try again? (See the sketch below.)
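For the second point, a minimal sketch of the two loss types (the function and argument names here are illustrative, not taken from the PNDM or DDIM code):

    import torch

    def noise_prediction_loss(model, x0, t, noise, alphas_bar, loss_type="square"):
        # Diffuse x0 to step t: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise
        a_bar = alphas_bar[t].view(-1, 1, 1, 1)
        x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
        pred = model(x_t, t)
        if loss_type == "square":               # L2 objective, as in DDPM/DDIM
            return (pred - noise).pow(2).mean()
        return (pred - noise).abs().mean()      # "linear": an L1 variant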

Thanks for your reply. I will try it again.

Before that, I had trained the model using the DDIM code, but it failed:
ermongroup/ddim#6 (comment)

Perhaps you can refer to my other repo, DiffOOD, which contains complete training code. I got an FID of 3.2 on CIFAR10 myself.
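If it helps with checking the numbers, FID is usually computed with an off-the-shelf tool such as pytorch-fid on a large set of generated samples (e.g. 50k); the paths below are placeholders:

python -m pytorch_fid samples/cifar10_generated data/cifar10_train_png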