ningyu1991/InclusiveGAN

Question regarding the FID score of the StyleGAN2 baseline in the paper


Dear authors,

Really impressive work on tackling the mode-dropping problem in GANs!

I am trying to reproduce some of the experimental results shown in Table 2 of the paper, specifically the FID score of the StyleGAN2 baseline. I implemented a training/validation separation mechanism in the official StyleGAN2 codebase, ran it with config-f (plus --mirror-augment=true and the default --total-kimg=25000), and obtained an FID of about 5.xx. The number is quite consistent across multiple runs, and it looks like it could be improved further with some hyperparameter tuning. Have I missed anything, or did you use a StyleGAN2 configuration other than the default config-f?
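For concreteness, the training/validation separation I added is roughly along the following lines. This is just a minimal sketch: the directory names and the 90/10 split ratio are placeholders, not my exact code:

```python
import os
import random
import shutil

SRC = "celeba/img_align_celeba"   # placeholder source directory
TRAIN = "celeba_split/train"
VALID = "celeba_split/valid"

random.seed(0)                    # fixed seed so the split is reproducible
files = sorted(os.listdir(SRC))
random.shuffle(files)
n_valid = len(files) // 10        # hold out 10% of images for validation FID

os.makedirs(TRAIN, exist_ok=True)
os.makedirs(VALID, exist_ok=True)
for i, name in enumerate(files):
    dst = VALID if i < n_valid else TRAIN
    shutil.copy(os.path.join(SRC, name), os.path.join(dst, name))
```

Each split is then converted with the official dataset_tool.py (create_from_images) and training is launched via run_training.py with --config=config-f --mirror-augment=true --total-kimg=25000.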

Looking forward to your comments and the code release!
Sincere thanks!

Hi Hubert,

Thank you for your interest in our work!

When reporting the results, I used --config=config-e-Gskip-Dresnet and trained on only the first 30k CelebA images, owing to our limited computing resources.
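For reference, selecting the subset amounts to taking the first 30k images by CelebA's standard zero-padded filenames (000001.jpg, 000002.jpg, ...), so lexicographic order matches the dataset order. A minimal sketch, with illustrative directory names rather than our exact script:

```python
import os
import shutil

SRC = "celeba/img_align_celeba"   # placeholder CelebA image directory
DST = "celeba_30k"                # placeholder output directory

os.makedirs(DST, exist_ok=True)
# Copy the first 30,000 images in filename order.
for name in sorted(os.listdir(SRC))[:30000]:
    shutil.copy(os.path.join(SRC, name), os.path.join(DST, name))
```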

Hope that clears things up!

Thank you,

Ning

Hi Ning,

Thanks for the reply!

I'm still a bit confused. Do you mean the resources are constrained by the nearest-neighbor search in the latent space, so that training with the full CelebA dataset would be infeasible?

Hi Hubert,

I meant that StyleGAN2 training time (until convergence) seems proportional to the dataset size. Given limited GPU availability before a deadline, we only trained on a subset of CelebA to speed up validation. Nearest-neighbor search does not constrain our pipeline: for time complexity, Prioritized DCI lets query time grow logarithmically with the data size; for space complexity, you can use a random projection to map each training image's feature or pixel representation to a lower dimension, which affects Prioritized DCI very little.
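To illustrate the random-projection point (a sketch, not our exact pipeline): a random Gaussian projection approximately preserves pairwise distances by the Johnson-Lindenstrauss lemma, so nearest neighbors found in the projected space closely match those in the original space, and Prioritized DCI can then operate on the much smaller projected vectors. All sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Small demo sizes; the paper's setting would be ~30k images.
n_images, feat_dim, proj_dim = 1000, 64 * 64 * 3, 128

# Stand-in for real per-image features or flattened pixels.
features = rng.standard_normal((n_images, feat_dim)).astype(np.float32)

# Random Gaussian projection: pairwise distances are approximately
# preserved, so nearest neighbors barely change after projection.
P = rng.standard_normal((feat_dim, proj_dim)).astype(np.float32) / np.sqrt(proj_dim)
projected = features @ P          # (n_images, proj_dim): far cheaper to store and search

def nearest_neighbor(query):
    """Brute-force 1-NN in the projected space; Prioritized DCI would
    replace this linear scan with a query whose time grows
    logarithmically in n_images."""
    q = query.astype(np.float32) @ P
    return int(np.argmin(np.linalg.norm(projected - q, axis=1)))
```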

Thank you,

Ning

Hi Ning,

Thanks for the clarification! Will there be updated numbers in the future for models trained on the full CelebA dataset?

Sincere thanks!

Hubert

Hi Hubert,

I'm afraid that is not on our near-term to-do list. Feel free to consider this new work on improving GAN performance given limited data; it shows significant improvement at a data size of 30k on FFHQ and LSUN Cat.

Thank you,

Ning

Hi Ning,

Thanks for your patience and guidance!
Sorry, one last question: is there an expected date for the code release? Can't wait to see the implementation details!

Hubert

Hi Hubert,

Sure thing. The code will be ready before ECCV starts.

Thank you,

Ning