koushiksrivats/FLIP

Questions about training with CelebA-Spoof


Hello! I have questions about your work.

In the paper, you mentioned "In each of the three protocols, similar to [16], we include CelebA-Spoof [64] as the supplementary training data to increase the diversity of training samples."

  1. Does this mean that you pre-train with the CelebA-Spoof dataset and then fine-tune on OCIM?

  2. Or does it mean that you fine-tune on CelebA-Spoof and three of the OCIM datasets all together at the same time, and then test on the left-out OCIM dataset? If so, isn't that OCI+CelebA-Spoof to M instead of OCI to M?

Thanks for sharing your work!

Hi
Thank you for your interest in our work.

Yes, your second interpretation is correct.
We fine-tune with CelebA-Spoof included among the source datasets and test on the dataset that has been left out.
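To make the setup concrete, here is a minimal sketch of the leave-one-out protocol with CelebA-Spoof as supplementary training data; the dataset names and helper function are illustrative only, not the actual FLIP code.

```python
# Illustrative sketch of the leave-one-out protocol described above
# (names and helper are assumptions, not the actual FLIP code).
OCIM = ["OULU-NPU", "CASIA-MFSD", "Replay-Attack", "MSU-MFSD"]  # O, C, I, M

def build_protocol(target):
    """Train on the three remaining OCIM datasets plus CelebA-Spoof;
    evaluate on the held-out target dataset."""
    sources = [d for d in OCIM if d != target] + ["CelebA-Spoof"]
    return sources, target

sources, target = build_protocol("MSU-MFSD")  # the "OCI to M" protocol
# sources -> ['OULU-NPU', 'CASIA-MFSD', 'Replay-Attack', 'CelebA-Spoof']
# target  -> 'MSU-MFSD'
```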

This fine-tuning strategy and reporting style follow the baseline method, and the corresponding settings can be found in the config file.
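For reference, a config entry in this spirit might look like the snippet below; the key names are assumptions, so please check the actual config file in the repo for the real layout.

```python
# Hypothetical config fragment; key names are assumptions,
# see the repo's config file for the actual fields.
config = {
    "protocol": "OCI_to_M",
    "source_datasets": ["OULU-NPU", "CASIA-MFSD", "Replay-Attack", "CelebA-Spoof"],
    "target_dataset": "MSU-MFSD",
}
```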

Thank you