VISION-SJTU/RECCE

pretrained model

GANG370 opened this issue · 25 comments

Very interested in your work! How can I train on my own dataset, or could you provide some pretrained models on FF++ or WildDeepfake? Thanks a lot!

I get "No such file or directory: 'path/to/config.yaml'" when training. Is something missing?

path/to/config.yaml is a placeholder; set it to a specific file location, e.g., config/Recce.yml.
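For illustration, a minimal sketch of what that means in code (PyYAML assumed; the concrete path and config keys depend on your checkout):

```python
# Hypothetical sketch: point the loader at a real config file instead of
# the 'path/to/config.yaml' placeholder.
import yaml

config_path = "config/Recce.yml"  # a concrete file, not the placeholder
with open(config_path, "r") as f:
    config = yaml.safe_load(f)
print(sorted(config))  # inspect the top-level keys
```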

I retrained the model on FF++ (c23) and you can access the model parameters via this link.
(Password: gn4Tzil#)

Thanks for your reply. If I understand the paper correctly, only real images are needed during training; is my understanding wrong? And if I want to train on my own dataset, what should I do?

Hi, the inputs to the network contain both real and fake images. The main idea is to compute the reconstruction loss for real images only, aiming to learn the common representations of real samples. The network requires fake samples to learn discriminative features.
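As a rough sketch of that idea (not the authors' exact code; the two-head forward pass and the 0 = real / 1 = fake label convention are assumptions):

```python
import torch.nn.functional as F

def training_losses(model, images, labels):
    """images: (B, 3, H, W); labels: (B,), assuming 0 = real and 1 = fake."""
    recon, logits = model(images)  # hypothetical reconstruction + classification heads
    # Classification loss is computed on both real and fake samples.
    cls_loss = F.cross_entropy(logits, labels)
    # Reconstruction loss is computed on real samples only.
    real = labels == 0
    rec_loss = F.l1_loss(recon[real], images[real]) if real.any() else images.new_zeros(())
    return cls_loss + rec_loss
```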

For training with your own dataset, you should define a custom dataloader that returns RGB images and binary labels. You may refer to the provided dataloaders under the dataset/ directory and modify the code accordingly.
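A minimal sketch of such a dataset (the file layout, label convention, and 299-pixel input size are assumptions; match the transforms to the repo's config):

```python
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class MyForgeryDataset(Dataset):
    """Yields (RGB image tensor, binary label) pairs."""

    def __init__(self, samples, image_size=299):
        # samples: list of (image_path, label) tuples, label 0 = real, 1 = fake
        self.samples = samples
        self.transform = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        return self.transform(image), torch.tensor(label, dtype=torch.long)
```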

Hi, the shared model-parameter link is invalid. Can you share another one? Thank you very much! :)

Hi, the previous sharing link expired. You can access the re-trained FF++ weights via this link.
(Password: 7v+MRf8L)

Thank you for your reply! I tested with my own test.py using the re-trained weights you provided, but got about 86% AUC on FF++ c40. I randomly sample one frame from each video in the test set and then compute the frame-level result. I wonder whether the less-than-ideal result is caused by the difference between frame-level and video-level evaluation. Can you give me some advice?

Hi, I think sampling only one frame from each video for testing may result in large variations. On average, we use about 50 frames/sequence for testing. Frame-level performance is considered. In addition, please ensure the conservative crop (enlarged by 1.3 around the central face region) is used for cropping facial images.
If you still have trouble processing the data, please send me an email or leave your email here. I will share our preprocessed FF++ dataset with you.
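For reference, a sketch of that conservative crop (the (x1, y1, x2, y2) box format and NumPy image layout are assumptions):

```python
def conservative_crop(frame, box, scale=1.3):
    """frame: H x W x 3 array; box: (x1, y1, x2, y2) detected face box."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w, half_h = (x2 - x1) * scale / 2.0, (y2 - y1) * scale / 2.0
    # Enlarge the box by `scale` around its center, then clamp to the frame.
    nx1, ny1 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    nx2, ny2 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return frame[ny1:ny2, nx1:nx2]
```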

Hi, I have the same problem when using the provided checkpoints to test.

When I used your provided pretrained FF++ weights (c40 version) to test on the FF++ test data, I only got an AUC of 86.75 (see the attached screenshot). In fact, this result is similar to the performance of my own retrained model.

I didn't change the code except for the dataloader, where I used my own pickle file. I also used the face-crop files you provided in #1 to preprocess the video frames. So I don't know what causes this problem; maybe it stems from the FF++ dataset preprocessing.

Hence, would you share your data-processing code or your preprocessed FF++ dataset for reference? My email is chnzm366aq@163.com. Thank you!

[Screenshot: AUC test result]

Hi, the new link seems to be broken as well; could you update it? Thanks!

Thank you very much!! My email is 21112025@bjtu.edu.cn.

Hello, I have tried using the provided code to test the generalization performance on FF++ (c40), but the results show a sharp performance drop (generally around 0.55). Considering that the only difference is the dataset, I hope you can provide a copy of your preprocessed FF++ dataset. Thank you very much! My email is beauding@foxmail.com.

Hi, I'm really interested in your work. Can you share your dataset with me? My email is hxSng@outlook.com

Hi, thanks for your work! Can you share your dataset with me? My email is c_z_chao@163.com

Hi, I'm very interested in your work, but I'm still having trouble processing the data and can't reproduce the results. So, would you share your data-processing code or your preprocessed FF++ dataset with me? My email is ruhangs@163.com. Thank you!

Hi, thanks for your work! It's also hard for me to reproduce the results. Can you share your dataset with me? My email is ctl-123-me@163.com

Hello,

Thank you for your excellent work. Could you also share your preprocessed data with me? My email is husseinisahar1@gmail.com

Hi, I'm very interested in your work, but I'm still having trouble processing the data and can't reproduce the results. So, would you share your data-processing code or your preprocessed FF++ dataset with me? My email is paulwang333@gmail.com. Thank you!

Thank you for your excellent work; I would appreciate it if you could provide the preprocessed dataset. My email is zhongjiannjupt@gmail.com

Hello, thank you for the work.
Could you please also send me the preprocessed dataset and the pretrained model?
My email address is ying.xu@ntnu.no
Thank you!

Thanks for your great work! It's also hard for me to reproduce the results. Can you share your dataset with me? My email is 872122623@qq.com

Hello. Thank you for your fantastic work. Could you also share your preprocessed data with me? My email is lxin0411@126.com
Thanks a lot!

Hi, thank you for your code. Could you share your preprocessed FF++ data with me? My email is lntu_llq@163.com

Hi, thank you for your code. Can you share the dataset with me? I'd really appreciate it. My email address is voyagewang@foxmail.com