philgras/neural-head-avatars

How to deal with artifacts? Such as eye/mouth/ear.

Closed this issue · 2 comments

Using my own video, I got some results. Most of the outputs seem good, but I still have some questions.

  • Intuitively, closing an eye should not affect other regions of the face. But in my case I found some deformations in the mesh, for example in the upper lip.
  • Sometimes the algorithm optimizes a closed-eye frame into a mesh and texture with both eyes open.
  • When processing frames with eye rolling, the output looks really odd.
  • By the way, on a V100 I have to reduce the batch size to 1 to run at all, and I scale the learning rate down to 1/16 of its original value accordingly. Is there a problem with this approach?
    Looking forward to your reply.

Thanks for your interest in our work.

  • These artifacts occur due to the incomplete disentanglement of the expression / pose blendshapes of the underlying 3DMM. These are known limitations of that method.
  • Our method relies on pretrained networks for facial landmark detection. Hence, when the predicted landmarks correspond to open eyes, our model will also optimize an avatar with opened eyes. We did not aim to improve the performance of these models but rather to design a model that is robust against noise in their predictions.
  • Decreasing the batch size and learning rate by the same factor is common practice and in general should yield similar results.
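As a minimal sketch of that linear scaling rule (the function name and the numbers are illustrative, not from the repository's code):

```python
def scale_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Linear scaling rule: learning rate proportional to batch size."""
    return base_lr * new_batch_size / base_batch_size

# Going from batch size 16 down to 1 scales the learning rate to 1/16,
# matching the adaptation described in the question above.
print(scale_lr(1e-4, 16, 1))  # 6.25e-06
```

Note that very small batches make gradients noisier, so results may still differ slightly from a large-batch run even with correctly scaled learning rates.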

Hope that helps.

Thanks for your interest in our work.

  • These artifacts occur due to the incomplete disentanglement of the expression / pose blendshapes of the underlying 3DMM. These are known limitations of that method.

1. So mesh deformations when the eyes close are inevitable, aren't they?
Is there a solution to this problem?
2. Is there no eye-close component in the 3DMM expression parameters (N x 100)? Or is eye closing just not modeled well by those expression parameters?
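For context on the second question: a 3DMM expresses vertex offsets as a linear combination of expression blendshapes, so a coefficient that drives the eyelids can also move lip vertices whenever the learned basis is not perfectly localized. A toy numpy sketch of that coupling (all shapes and numbers invented for illustration, not the actual FLAME basis):

```python
import numpy as np

# Toy linear blendshape model: vertices = template + basis @ coeffs.
# 3 vertices (eyelid, upper lip, ear), 2 expression components.
template = np.zeros((3, 3))   # rest-pose vertex positions (x, y, z)
basis = np.zeros((3, 3, 2))   # per-vertex offset direction for each component

# Component 0 is meant to close the eye, but its basis also carries a
# small spurious offset on the upper-lip vertex -- this is the kind of
# "incomplete disentanglement" that deforms the lip when the eye closes.
basis[0, 1, 0] = -1.0   # eyelid vertex moves down (intended)
basis[1, 1, 0] = 0.05   # upper-lip vertex also moves (unintended coupling)

coeffs = np.array([1.0, 0.0])        # activate only the eye-close component
vertices = template + basis @ coeffs

print(vertices[1])  # upper lip shifts even though only the eye was "closed"
```

So eye closing can be representable in the expression parameters and still leak deformation into other regions; the problem is the coupling in the basis, not necessarily a missing component.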