bryandlee/FreezeG

How to use my own dataset to make Face2mydataset

abelghazinyan opened this issue · 5 comments


Hi, you can simply do

python train.py --finetune_loc 3 --ckpt [PATH_TO_PRETRAINED_FFHQ_MODEL]

However, an end-to-end image translation is not possible at the moment.

Thanks
Then what does that finetuning do, if not end-to-end image translation?
I can't understand, for example, how your Face2Art works.
I want to give a face as input and get it back in my dataset's style.

Finetuning updates only the last few layers of the pre-trained generator. In style-based generators, each layer learns a different level of abstraction: the early layers determine the overall geometric structure of the image, while the last few layers control low-level style such as color and texture. By freezing the early layers and fine-tuning only the last ones, the low-level style of the generated images can be altered while the overall geometry and semantics are preserved. The more layers you finetune, the less of the original information is preserved. For Face2Art, for example, I finetuned only the last two layers of the FFHQ-trained generator.
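The freezing scheme can be sketched as follows. This is a toy illustration, not the repo's actual API: `Layer`, `build_generator`, and `freeze_early_layers` are stand-in names, and the real `--finetune_loc` flag selects a split point inside the StyleGAN2 generator.

```python
# Toy sketch of FreezeG-style fine-tuning: freeze the early generator
# blocks and hand only the last few to the optimizer.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def build_generator():
    # A style-based generator is a stack of resolution blocks: early
    # blocks set coarse geometry, late blocks set color and texture.
    return [Layer(f"block_{r}") for r in [4, 8, 16, 32, 64, 128, 256]]

def freeze_early_layers(layers, finetune_loc):
    # finetune_loc = how many of the LAST blocks stay trainable,
    # mirroring the --finetune_loc idea in the quoted train command.
    for layer in layers[:-finetune_loc]:
        layer.trainable = False
    return [l for l in layers if l.trainable]  # params for the optimizer

gen = build_generator()
trainable = freeze_early_layers(gen, finetune_loc=2)
print([l.name for l in trainable])  # only the last two blocks update
```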

The side-by-side images show two samples generated from the original and finetuned models. To translate the image, you would have to embed the target image in the original model's latent space and then generate the translated image using the latent vector and finetuned model.
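The two-step translation described above (project with the original model, render with the finetuned one) can be sketched with a 1-D toy in place of the real networks. Here `g_original`, `g_finetuned`, and `project` are stand-ins for the StyleGAN2 generators and the iterative latent optimization that projector.py performs, not the actual implementation.

```python
# 1-D toy of "embed in the original latent space, generate with the
# finetuned model".

def g_original(w):
    # toy "generator": maps a latent to an image value (the geometry)
    return 2.0 * w + 1.0

def g_finetuned(w):
    # same early mapping, but a different "style" applied at the end
    return (2.0 * w + 1.0) * 0.5 + 3.0

def project(target, steps=200, lr=0.1):
    # gradient-descent projection: find w with g_original(w) ~ target,
    # analogous to what projector.py does with a perceptual loss
    w = 0.0
    for _ in range(steps):
        err = g_original(w) - target
        w -= lr * err * 2.0  # d g_original / d w = 2
    return w

w = project(9.0)             # embed the target in the ORIGINAL model
translated = g_finetuned(w)  # re-render with the FINETUNED model
# here w converges to 4.0 and translated is 7.5
```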

How to embed the target image in the original model's latent space?

projector.py will do that
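If the repo follows the rosinality/stylegan2-pytorch layout it builds on, the projection step would look roughly like the command below; the flag names are an assumption from that upstream project, so check `python projector.py --help` in your checkout first.

```shell
# Project a target face into the ORIGINAL (FFHQ) model's latent space;
# flags assumed from the upstream stylegan2-pytorch projector.
python projector.py \
    --ckpt [PATH_TO_PRETRAINED_FFHQ_MODEL] \
    --size 256 \
    target_face.png
```

The resulting latent is then fed to the finetuned checkpoint to produce the translated image.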