nii-yamagishilab/ClassNSeg

Dataset splits used for the experiments

nviable opened this issue · 2 comments

Would you happen to have any documentation of the splits you used for the experiments?

My group would like the exact data splits so that we can directly compare the performance of any methods we devise in the future against yours. This would hopefully help others in the field as well.

You did mention in your paper that the split was 704 | 150 | 150, but that is probably for the older FaceForensics dataset (since the new one has 1000 total videos per type). I'm not really sure how similar the two datasets are.

Each dataset was split into 704 videos for training, 150 for validation, and 150 for testing.

The FaceForensics++ dataset has JSON files documenting the exact splits, which is pretty useful (although theirs is 720 | 140 | 140), but I can't be sure whether your group used them since they were introduced fairly recently.
https://github.com/ondyari/FaceForensics/tree/master/dataset/splits
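For anyone else trying to reproduce the splits, here is a minimal sketch of reading those files, assuming the train.json / val.json / test.json layout in that directory and that each file is a JSON list of [target, source] video-ID pairs; `SPLITS_DIR` is a placeholder for a local checkout of the FaceForensics repo:

```python
import json
from pathlib import Path

# Hypothetical local path; the split files live under dataset/splits/
# in the FaceForensics repository linked above.
SPLITS_DIR = Path("FaceForensics/dataset/splits")

def load_split(name):
    """Load one split ("train", "val", or "test").

    Each file is assumed to hold a JSON list of [target, source]
    video-ID pairs, so every pair contributes two original videos.
    """
    with open(SPLITS_DIR / f"{name}.json") as f:
        pairs = json.load(f)
    # Flatten the pairs into the set of individual video IDs.
    return sorted({vid for pair in pairs for vid in pair})

for name in ("train", "val", "test"):
    ids = load_split(name)
    print(f"{name}: {len(ids)} videos")  # expected: 720 / 140 / 140
```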

Edit: added a mention of FaceForensics++ split

As you can see in Table 1 of the paper, we split the FaceForensics++ data into 720 | 140 | 140. We used the information from the provided JSON files.

Got it, thanks!