khorrams/c3lt

When using ImageNet and BigGAN, why is an encoder not required?

Closed this issue · 3 comments

In the paper, it says "Here, we sample x directly from the latent space distribution z_x ∼ N (0, I) with truncation 0.4 and an encoder E is not required."

I do not understand why an encoder is not required when given a pretrained BigGAN. Don't we still need both the BigGAN and a corresponding encoder to train C3LT?

In any case, thank you for your excellent work!

I believe that in this experiment no additional dataset is used. They simply use C3LT to find a counterfactual image for each image that BigGAN generates. Since every image is produced from a sampled latent vector z_x, that vector is already known by construction, so no encoder is needed to map an image back into the latent space.
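The distinction can be sketched as follows (a minimal illustration, not the authors' code; the sampling routine below is one common way the BigGAN truncation trick is implemented, and the generator is only a stand-in):

```python
import numpy as np

def truncated_z(batch, dim, truncation=0.4, seed=0):
    """Sample z ~ N(0, I) with the truncation trick: resample any
    coordinate whose magnitude exceeds 2, then scale by `truncation`."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((batch, dim))
    mask = np.abs(z) > 2
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > 2
    return truncation * z

# With a pretrained generator G, the image is x = G(z_x), where z_x is
# sampled directly as above. Because z_x is known by construction, no
# encoder E is needed to recover a latent code from x. An encoder is
# only required when starting from real dataset images whose latent
# codes are unknown.
z = truncated_z(4, 128)
print(z.shape)             # (4, 128)
print(np.abs(z).max() <= 2 * 0.4)  # True: all coordinates bounded
```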

Thank you very much! I understand now.