datitran/face2face-demo

Extremely blurry result with distorted colours

wanshun123 opened this issue · 3 comments

I've tested on the Merkel video as in the readme, first on a model trained for 200 epochs and then on one trained for 1000 epochs. Both models produce similarly poor results:

(screenshot: frame_screenshot_28 12 2018)

My input video is a very high-quality .mp4, though as the screenshot shows, my face was definitely much closer to the camera than Merkel's. Any advice on improving the results? I'm also wondering whether it's at all possible to reproduce the extremely high-quality example from the original face2face researchers (https://www.youtube.com/watch?v=ohmajJTcpNk).

Maybe you should sit at the same distance as the person in your dataset.

Sitting at exactly the same distance from the camera as the source being controlled is definitely necessary, but I'm still getting blurry results. I think trying to replicate face2face from https://www.youtube.com/watch?v=ohmajJTcpNk is outside the scope of this repo.
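One way to check the distance match before recording is to compare how much of the frame the detected face occupies in your input versus in the training footage. This is a minimal sketch, not part of the repo: the helper names and the example numbers are hypothetical, and it assumes you already have a face bounding-box height per frame (e.g. from dlib's detector, which the preprocessing script uses).

```python
# Hypothetical check: compare the relative face size in an input video
# to the relative face size in the training (Merkel) footage, so the
# camera distance can be adjusted before recording.

def face_scale(face_box_height, frame_height):
    """Fraction of the frame height occupied by the detected face box."""
    return face_box_height / frame_height

def distance_matches(input_scale, train_scale, tolerance=0.10):
    """True if the input face scale is within `tolerance` of the training scale."""
    return abs(input_scale - train_scale) <= tolerance

# Illustrative numbers only: a training face filling ~25% of a 1080p frame
# versus an input face filling ~60% -- far too close to the camera.
train = face_scale(270, 1080)   # 0.25
user = face_scale(648, 1080)    # 0.60
print(distance_matches(user, train))  # False: move farther from the camera
```

If the check fails, moving back (or cropping/rescaling the input frames so the face-box ratios roughly match) should bring the test-time landmark sketches closer to the distribution the model was trained on.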

I think the face-landmark generation step is quite different between the two projects, but my results were still above average.
https://github.com/Atin17/Deep-Learning/blob/master/face2face/README.md