ZZUTK/Face-Aging-CAAE

Training gets overfitting

haithanhp opened this issue · 12 comments

Hi @ZZUTK ,

Thanks for your great work. I have trained on UTKFace with the following configuration:
self.loss_EG = self.EG_loss + 0.0000 * self.G_img_loss + 0.01 * self.E_z_loss + 0.0000 * self.tv_loss
The per-epoch sample results look good, but when I try some test images, the results are not good.

Could you tell me exactly which numbers you chose to get your results?

Thanks,
Hai

Input: [input image]

Output: [test_as_female image]

Hi Hai,

If you set the weight for self.G_img_loss to 0.0000, you only train the autoencoder. You may set it to 0.0001 after the autoencoder is stable.
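As a minimal runnable sketch of that setup (TF1-style, to match the repo; the loss tensors below are dummy stand-ins for self.EG_loss, self.G_img_loss, etc., and g_img_weight is a hypothetical feedable weight, not a variable from the repo):

import tensorflow as tf

# Dummy scalar stand-ins for the model's loss tensors (illustration only).
EG_loss, G_img_loss, E_z_loss, tv_loss = [tf.constant(1.0) for _ in range(4)]

# Fed 0.0 while training the autoencoder alone, then 0.0001 once
# reconstructions look stable, with no need to rebuild the graph.
g_img_weight = tf.placeholder_with_default(0.0, shape=[])

loss_EG = EG_loss + g_img_weight * G_img_loss + 0.01 * E_z_loss + 0.0 * tv_loss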

Hi @susanqq ,

Thanks for your reply. I tried that configuration, but I still get overfitting. What is the actual configuration you used to get your results?

Hi Hai,

The results published in the original paper were trained on a large face aging dataset, of which UTKFace is one part. Due to copyright, we cannot provide the other datasets. All the faces are aligned with an affine transformation. If you want to further improve the performance, you can add a mask to remove the background.
The best parameters may also depend on the dataset. We set lambda to 0.0001. Can you provide the autoencoder result and the result after adding the adversarial loss? Your results somehow indicate that the adversarial loss doesn't help; it should be sharper compared with the autoencoder results.
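Regarding the background mask mentioned above, a minimal sketch of the idea (not the repo's code; the mask is assumed to come from a face parser or a landmark-based convex hull, with 1 on the face and 0 on the background):

import numpy as np

def apply_mask(image, mask):
    # image: (H, W, 3) float array; mask: (H, W) binary array.
    # Zeroing the background keeps the loss focused on the face region.
    return image * mask[..., np.newaxis]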

Hi,
The autoencoder result looks like mean faces when I set all the loss-term weights to zero. I changed it to: self.loss_EG = self.EG_loss + 0.0001 * self.G_img_loss + 0.01 * self.E_z_loss + 0.0000 * self.tv_loss

By the way, I am trying to reproduce your results to compare against our team's approach for upcoming research papers. Would you mind sharing your trained models as well as the image testing code? If that's OK, you can send them to my email: pthai1204@gmail.com. I would really appreciate your help.

Here is the result when I test as male:
[test_as_male image]

I think setting the stddev of all the weight initializers to 0.02 can solve this problem.

Change the code related to the weight initializer, for example:
initializer=tf.truncated_normal_initializer(stddev=0.02)
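To show where it plugs in, a minimal TF1-style sketch (conv2d here is a hypothetical wrapper for illustration, not a function from the repo):

import tensorflow as tf

def conv2d(x, out_channels, name):
    # Small initial weights (stddev=0.02) are a common choice that tends
    # to stabilize GAN training.
    return tf.layers.conv2d(
        x, out_channels, kernel_size=5, strides=2, padding='same',
        kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
        name=name)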

@susanqq How can I train the autoencoder to be stable? Does it mean we train the model in stages, gradually changing the weight of the GAN loss?

@shz1314 Stability here means you can generate good (but blurry) reconstructed images that are quite similar to the given input. You can then increase the lambda to generate sharper results. It is not necessary to train them separately. However, in practice, we noticed that it is easier and more stable to train the autoencoder first and then add the adversarial loss.
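A toy but runnable sketch of that two-phase schedule (the variable, loss, and optimizer are stand-ins; in practice they would be the repo's loss_EG and its optimizer, and g_img_weight is the same hypothetical feedable weight as above):

import tensorflow as tf

# Stand-in model so the schedule runs on its own (illustration only).
g_img_weight = tf.placeholder_with_default(0.0, shape=[])
w = tf.Variable(1.0)
loss = w ** 2 + g_img_weight * w
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

warmup_epochs, num_epochs, steps_per_epoch = 10, 50, 100
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        # Phase 1: autoencoder only; phase 2: enable the adversarial term.
        weight = 0.0 if epoch < warmup_epochs else 1e-4
        for _ in range(steps_per_epoch):
            sess.run(train_op, feed_dict={g_img_weight: weight})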

Thank you very much for your reply. I have a new problem: when I use Local Binary Patterns as a loss function, the result doesn't get any better. I want to do some work on texture information.
If you can give me some advice, I would be very grateful.
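One likely reason, for reference: the standard LBP code is a hard threshold, which has zero gradient almost everywhere, so nothing can backpropagate through it; a soft (e.g., sigmoid) comparison is the usual workaround. A minimal numpy sketch of the basic 8-neighbour LBP code (not from the repo):

import numpy as np

def lbp_8(img):
    # Basic 8-neighbour LBP codes for the interior pixels of a 2-D
    # grayscale image. Note the hard >= threshold is non-differentiable.
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code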

Hi @susanqq,
When I change the batch size, the testing model always has problems. Do you have other code that can be used with other batch sizes? My email is 981449149@qq.com.

Thanks,
Sun
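A common cause of this is a graph built with a fixed batch size. A minimal sketch of the usual fix (assumed, not the repo's exact code): give the input placeholder a dynamic batch dimension and read the batch size at run time:

import tensorflow as tf

size_image, num_channels = 128, 3

# None lets any batch size be fed at test time, instead of a fixed one.
images = tf.placeholder(
    tf.float32, [None, size_image, size_image, num_channels],
    name='input_images')

# Downstream ops should use the dynamic batch size rather than a
# hard-coded constant.
batch_dim = tf.shape(images)[0]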

Hi HaiPhan,
Did you manage to get suitable testing results?

Hello @susanqq
I am interested in your approach and am trying to reproduce it, but I get the same bad results as @HaiPhan1991 due to a lack of data, since I only have your UTKFace dataset. Could you please share your trained model with me? na.li2@ucdconnect.ie

Thank you in advance.