jtchen0528/PCL-I2G

About masks when learning with FF++

Takase-Syunki opened this issue · 6 comments

Hello, thanks for your code implementation.
Could you tell me how to prepare the masks when training on the FF++ dataset?

According to the paper, the mask is generated as the convex hull of the 68 facial landmarks on a face (Sec. 3.2, I2G). In data/I2G_dataset.py, I randomly select 32 frames (Sec. 4.1) and compute the convex hull of their landmarks.
That's how I prepare the mask; I believe it's in the I2G generation code.
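
For reference, a minimal sketch of that step, assuming dlib's pretrained 68-point predictor; the predictor file name and helper name here are illustrative, not the exact code in data/I2G_dataset.py:

```python
import cv2
import dlib
import numpy as np

# Assumes dlib's pretrained 68-landmark model is available locally.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_hull_mask(frame_bgr):
    """Binary mask covering the convex hull of the 68 facial landmarks."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None  # no face detected in this frame
    shape = predictor(gray, faces[0])
    points = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)
    hull = cv2.convexHull(points)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    return mask
```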

I'm sorry, my question was unclear.
How do I prepare masks for training with the "real fake" images (the fake data from the FF++ dataset) instead of I2G-generated fake images?
Also, what is the process for this?

Oh, I misunderstood.

I did that by detecting faces in the original videos. Each FF++ fake video is composed from two real videos, and FF++ does list out the background videos and the videos with the swapped faces. Those pairs are listed in one file (a CSV, as I remember; maybe I'm wrong), or you can read them from the filenames of the fake videos, which look like XXX_XXX.mp4: one of the two IDs is the background video.
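
For example, a hedged sketch of reading the ID from the filename; whether the first or the second ID is the background video should be verified against the FF++ documentation, this assumes the first:

```python
from pathlib import Path

def background_video_id(fake_video_path):
    """Split an FF++ fake filename like '000_003.mp4' into its two IDs."""
    first_id, second_id = Path(fake_video_path).stem.split("_")
    return first_id  # assumption: the first ID names the background video

# e.g. background_video_id(".../Deepfakes/c23/videos/000_003.mp4") -> "000"
```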

I detected the face landmarks in the background video and generated the mask from them, so I can get accurate masks for the "real fake" images. The code might not be in this GitHub repo, sorry about that.
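
Roughly, the two sketches above combine like this (the paths and the 32-frame count are illustrative, following Sec. 4.1; the helper names are the ones assumed earlier):

```python
import cv2

def masks_for_fake_video(fake_path, real_video_dir, num_frames=32):
    """Generate landmark-hull masks for a fake video from its real background video."""
    vid_id = background_video_id(fake_path)
    cap = cv2.VideoCapture(f"{real_video_dir}/{vid_id}.mp4")
    masks = []
    while len(masks) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        mask = landmark_hull_mask(frame)
        if mask is not None:
            masks.append(mask)
    cap.release()
    return masks
```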

Thank you very much.
I see. So that's how you do it.
I'm sorry, but could you upload that code to this GitHub repo?

Well, I kinda lost the code, but it is very similar to the I2G generation code. Use the part where I detect the face landmarks and crop out the convex hull.

I see.
Thank you very much.
I kind of understand how to do it.