SelfishGene/SFHQ-dataset

Could you please share e4e encoder parameters

tommy-qichang opened this issue · 4 comments

Hi there,

Thank you so much for sharing such a great dataset. It will help the whole community for sure.
I am just wondering if you could share your trained e4e model parameters?

Thanks

Can you explain what you mean by parameters?
I have not retrained the e4e model, but rather used the pre-trained model from the original repo.

Hi,
You mentioned:
Each inspiration image was encoded by encoder4editing (e4e) into StyleGAN2 latent space (StyleGAN2 is a generative face model trained on the FFHQ dataset), and multiple candidate images were generated from each inspiration image.
I presume that means you are using e4e to find the latent code of each inspiration image, and then generating the synthetic realistic person image from that code with the pretrained StyleGAN2 model?

So my question is: where can I find the pretrained e4e model that finds the latent code for a cartoon AAHQ image, from which the corresponding synthetic real face image can be generated?

There is nothing special here in my e4e encoder or generator.
I'm using the original StyleGAN2 generator as-is and the original e4e encoder as-is.
You can find the pretrained e4e model here: https://github.com/omertov/encoder4editing
You can find the pretrained StyleGAN2 model here: https://github.com/NVlabs/stylegan2-ada-pytorch

Here is a short Twitter thread explaining the process:
https://twitter.com/DavidBeniaguev/status/1376020024511627273?s=20&t=kH9J5mV9hL8e3y8PruuB5Q
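In rough code, the encode-then-generate pipeline looks something like the sketch below. This is a minimal illustration rather than the exact dataset-generation code: `load_e4e_encoder` is a hypothetical placeholder for the checkpoint loading done in the e4e repo's inference scripts, the file names (`ffhq.pkl`, `e4e_ffhq_encode.pt`, `inspiration.png`) are placeholders, and perturbing the latent is just one plausible way to obtain multiple candidate images per inspiration image.

```python
import pickle
import torch
from PIL import Image
from torchvision import transforms

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# StyleGAN2-ADA generator, loaded as shown in the NVlabs README.
# Unpickling requires the stylegan2-ada-pytorch repo to be on the Python path.
with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].to(device).eval()

# Hypothetical placeholder: replace with the checkpoint-loading code from
# the encoder4editing repo (see its inference scripts).
encoder = load_e4e_encoder('e4e_ffhq_encode.pt').to(device).eval()

# e4e expects a 256x256 RGB input normalized to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
inspiration = preprocess(Image.open('inspiration.png').convert('RGB'))
inspiration = inspiration.unsqueeze(0).to(device)

with torch.no_grad():
    # Encode the inspiration image into W+ latent space: one 512-d vector
    # per generator layer, i.e. a tensor of shape [1, G.num_ws, 512].
    w_plus = encoder(inspiration)

    # Decode several candidate images from the latent. Perturbing the code
    # is just one plausible way to get multiple candidates per inspiration
    # image; the dataset's exact procedure may differ.
    candidates = [
        G.synthesis(w_plus + 0.1 * torch.randn_like(w_plus), noise_mode='const')
        for _ in range(4)
    ]
```

The generator outputs images in [-1, 1], so they need to be rescaled to [0, 255] before saving; the stylegan2-ada-pytorch README shows this step as well.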

Oh, got it. Thank you so much.