felixrosberg/FaceDancer

Swapped-face is just the same as target-face in some cases

onlyhantenghao opened this issue · 4 comments

Hello @felixrosberg ! Thanks for your great work! FD has produced wonderful results on most of my test samples, but a very strange problem has occurred:

  • I have a pair of inputs, call them src-face and target-face. After processing by FD I got a swapped-face, but the swapped-face was just the same as the target-face, as in these examples: [src-face, target-face, swapped-face]
    20191004011757139-- yangmi
    dilireba-- gulinazha

I'm confused about why FD failed. Obviously, src-face and target-face are not the same person, so src-face should not produce the same ArcFace ID embedding as target-face.

  • Here are some successful samples with the same src-face:
    20191004011757139-- dilireba
    dilireba-- taylor

If you need it, here are the test-samples I used: https://drive.google.com/drive/folders/1xYSU2glzsORyr03yteVx08wjdom9VnTP?usp=sharing
Looking forward to your reply, thanks!

Hi @onlyhantenghao ,

Thank you! My initial guess was that there could have been a bias towards Asian people in the data, causing FaceDancer to struggle with Asian faces. But when I tested with a different source face it seemed to work. So my second thought is that FaceDancer has learned to ignore manipulation if the source and target are the same face (it makes perceptually perfect reconstructions if source and target are the same image), and that it somehow considers your images to be the same identity. This is pure speculation, by the way. The reason for this could also be a bias towards Asian faces.

So I am kind of confused as well. I will try playing around with this more. If my speculation is correct, a "simple" fix would be fine-tuning on a dataset that represents Asian people better.

Thanks for your reply, I quite agree with your thoughts. Recently I calculated the ArcFace embedding cosine similarity between src and target in my test samples:

  • In most successful cases, the similarity of the image pair is lower than 0.1 and close to 0;
  • In the failed cases above, the similarity is basically in the 0.2-0.4 range;
  • The similarity between two different photos of the same person is basically greater than 0.5.

What I'm trying to say is: is it possible that the bias exists in ArcFace instead of FD?

  • If you try a different pretrained ArcFace under the PyTorch version, I wonder whether you would see the same problem.
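The measurement above can be reproduced with a small check. This is only a sketch: it assumes you already have ArcFace embeddings as NumPy vectors (extracted by whatever ArcFace model you use), and the 0.5 threshold simply echoes the same-person observation above rather than being an official value.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def likely_same_identity(src_emb, tgt_emb, threshold=0.5):
    # Illustrative threshold: in the samples above, two photos of the same
    # person scored > 0.5, while the failed swap pairs sat around 0.2-0.4.
    return cosine_similarity(src_emb, tgt_emb) > threshold
```

A pair scoring in the 0.2-0.4 band would fall below this threshold yet still be far from the near-zero scores of the successful cases, which is exactly the ambiguous region the failed swaps occupy.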

Yes! The bias could indeed be in the ArcFace model instead.

Now, it was trained a long time ago using a now-outdated dataset. I believe the ArcFace provided by InsightFace would not have this problem (or at least significantly less of it). So it would probably be ideal to retrain FD in PyTorch using the ArcFace provided by InsightFace. As mentioned in the README, I am having problems reproducing FD in PyTorch; the identity loss does not seem to want to converge. I have had some success when omitting the mapping network, which is annoying, as it improved the results in TensorFlow.
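For context, the identity loss discussed here is commonly the cosine distance between ArcFace embeddings of the swapped output and the source face. The following is a minimal sketch of that idea, not FaceDancer's actual training code; it assumes pre-computed (batch, dim) embeddings from a frozen ArcFace network rather than running the network itself.

```python
import numpy as np

def identity_loss(emb_swapped, emb_source):
    """Mean of 1 - cos(z_swapped, z_source) over a batch.

    emb_swapped, emb_source: (batch, dim) arrays of embeddings; a real
    pipeline would obtain these from a frozen ArcFace model applied to
    the swapped result and the aligned source face.
    """
    a = emb_swapped / np.linalg.norm(emb_swapped, axis=-1, keepdims=True)
    b = emb_source / np.linalg.norm(emb_source, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))
```

A loss near 0 means the swapped faces carry the source identity; a loss stuck near 1 (orthogonal embeddings) would be the non-converging behaviour described above.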

I'm glad to hear that you're making progress on the PyTorch implementation! Thanks again!