Artifacts during training
Hello, thank you for your interest in our work.
Did you generate these images by yourself?
Please double check that you clamp the output of the generator, as we do in the different manipulation methods:

```python
(manipulated_img.clamp(min=0, max=1).permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
```
Let me know if this solves the problem.
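(For reference, a minimal sketch of the full post-processing step, assuming `manipulated_img` is a float tensor of shape (3, H, W) roughly in [0, 1]; the helper name is hypothetical:)

```python
import numpy as np
from PIL import Image

# Hypothetical helper illustrating the clamp-before-cast step above.
# Casting unclamped floats to uint8 wraps around and produces the
# speckled artifacts, so clamp to [0, 1] first.
def tensor_to_image(manipulated_img):
    arr = (manipulated_img.clamp(min=0, max=1)  # keep values in [0, 1]
           .permute(1, 2, 0)                    # CHW -> HWC
           .cpu().numpy() * 255).astype(np.uint8)
    return Image.fromarray(arr)
```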
Your method did solve the issue. I really appreciate your help and your work. The face aging results are amazing, but I still found some minor defects. Maybe they are not real problems.
- First, the results generated by OverLORD are not as sharp as those of the original StyleGAN2 model; they look slightly blurry.
- Second, when I use in-the-wild images (cropped and aligned) as input, rather than FFHQ images, the synthesized face no longer preserves the identity of the input face after age transformation; it looks like a different person.
I think the possible reasons are as follows:
The uncorrelated encoder compresses the input into a 256-dimensional feature vector, so a lot of information may be lost, including identity information. Have you ever tried using a 256x4x4 feature map from the uncorrelated encoder as the constant input of the StyleGAN? I'm not sure whether it would work or would bring other issues.
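(Purely to illustrate the idea, a hypothetical sketch; the class and argument names are placeholders, not the actual OverLORD or StyleGAN2 code:)

```python
import torch.nn as nn

# Hypothetical sketch: start synthesis from a 256x4x4 encoder feature
# map instead of StyleGAN2's learned constant input. All names here
# are placeholders, not the released implementation.
class EncoderConditionedSynthesis(nn.Module):
    def __init__(self, synthesis_blocks, feat_channels=256, const_channels=512):
        super().__init__()
        self.blocks = nn.ModuleList(synthesis_blocks)  # style-modulated blocks
        # 1x1 conv projecting the encoder feature map to the channel
        # width the first synthesis block expects
        self.proj = nn.Conv2d(feat_channels, const_channels, kernel_size=1)

    def forward(self, feat_map, styles):
        # feat_map: (B, 256, 4, 4) from the uncorrelated encoder,
        # replacing the learned constant input
        x = self.proj(feat_map)
        for block, w in zip(self.blocks, styles):
            x = block(x, w)
        return x
```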
Hoping for your reply!
Wishing you a happy new year! 😊😊😊
Glad to hear that the artifacts have disappeared.
Regarding the in-the-wild images, the results we show (e.g. Will Smith, Andrew Ng and more) were not present in FFHQ.
However, I agree that the identity is not perfectly preserved in many cases.
The objective of this work was to develop a principled disentanglement framework without introducing architectural biases of the kind you mentioned.
If you are interested specifically in human face editing, I believe the results could be improved by introducing a supervised face-identity loss term during training.
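A minimal sketch of such a term, assuming `face_embedder` is a frozen, pretrained face-recognition network (e.g. ArcFace-style) that maps an image batch to identity embeddings; it is a placeholder, not part of the released code:

```python
import torch
import torch.nn.functional as F

# Sketch of a supervised face-identity loss. `face_embedder` is assumed
# to be a frozen, pretrained recognition network (placeholder).
def identity_loss(face_embedder, input_img, generated_img):
    with torch.no_grad():
        e_in = F.normalize(face_embedder(input_img), dim=-1)
    e_out = F.normalize(face_embedder(generated_img), dim=-1)
    # penalize low cosine similarity between input and output identities
    return (1.0 - (e_in * e_out).sum(dim=-1)).mean()
```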
Good luck!