crywang/RFM

Instructions for setting up the project

enquestor opened this issue · 3 comments

Hello, I'm very interested in your work, and I'm trying to get your project running.

I would like to know more details about the project, for example what modules are needed and where the three datasets as well as the model should be placed relative to the project root folder. I'd also like to know if any pre-processing should be done for the datasets.

It would also be nice if you could provide a Dockerfile with the official PyTorch image as a base image, or if I could somehow get it running, I could make one and commit it so that others that are interested could spin up the project easily!

Thanks for your attention.

  1. You must install pretrained-models before running our project.
  2. The data_profile is just an example; you can organize the data and model in any way you want.
  3. The three datasets are composed of real and fake faces in the form of videos. Therefore, you have to convert all the videos to pictures for subsequent processing. For each frame in a video, you should 1) detect the face, 2) obtain the face box, and 3) extract the face according to the face box. Each face is saved in PNG format at a size of 256 × 256.
  4. Thanks for your suggestion. We may consider publishing the project through Dockerfile in the future.
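The per-frame extraction steps in point 3 could be sketched roughly as follows. This is a hypothetical illustration, not the repo's actual preprocessing code: it assumes OpenCV (`cv2`) with its bundled Haar cascade for face detection, and the `extract_faces` / `clip_box` names and output layout are made up for the example.

```python
# Sketch of the pipeline from point 3: detect face -> get box -> crop -> save 256x256 PNG.
# Assumes OpenCV; all names here are illustrative, not from the RFM repo.
import os

def clip_box(x, y, w, h, img_w, img_h):
    """Clip a detected face box (x, y, w, h) to the image bounds,
    returning (x0, y0, x1, y1) corner coordinates."""
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + w), min(img_h, y + h)
    return x0, y0, x1, y1

def extract_faces(video_path, out_dir, size=256):
    import cv2  # imported here so the pure helper above has no dependencies
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            x0, y0, x1, y1 = clip_box(x, y, w, h,
                                      frame.shape[1], frame.shape[0])
            face = cv2.resize(frame[y0:y1, x0:x1], (size, size))
            cv2.imwrite(os.path.join(out_dir,
                        f"{frame_idx:06d}_{saved}.png"), face)
            saved += 1
        frame_idx += 1
    cap.release()
```

In practice a dedicated face detector (e.g. MTCNN or dlib) is usually preferred over a Haar cascade for deepfake datasets, but the detect/box/crop/save structure is the same.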

Thanks to your instructions, I have been able to successfully run and train your model.

One small problem I ran into: the script seems to stop after the final batch without saving its weights, even though it saves successfully at earlier checkpoints. I haven't had time to look into it, but it has happened 2 out of 2 times when training with both DFDC and Celeb-DF.

Anyways, thanks for your great work!

In our script, we only save a checkpoint when the model achieves better performance than the previous best, so the final batch's weights are not written unless they set a new best.
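That save-only-on-improvement behaviour might look roughly like this. A minimal sketch: the `BestCheckpointer` class, the metric, and the callback style are assumptions for illustration, not the repo's actual code.

```python
# Sketch of "save only when the tracked metric improves" checkpointing.
# Class and method names are hypothetical, not from the RFM repo.
class BestCheckpointer:
    """Invoke a save callback only when the metric beats the best so far."""

    def __init__(self):
        self.best = float("-inf")

    def step(self, score, save_fn):
        """Return True (and call save_fn) if `score` sets a new best."""
        if score > self.best:
            self.best = score
            save_fn()
            return True
        return False
```

Under this scheme, a final epoch whose validation score does not exceed the running best produces no new weight file, which matches the behaviour reported above.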