Repository containing the materials for the Kaggle Deepfake Detection competition.
Deepfake techniques, which present realistic AI-generated videos of people doing and saying fictional things, have the potential to have a significant impact on how people determine the legitimacy of information presented online. These content generation and modification technologies may affect the quality of public discourse and the safeguarding of human rights—especially given that deepfakes may be used maliciously as a source of misinformation, manipulation, harassment, and persuasion. Identifying manipulated media is a technically demanding and rapidly evolving challenge that requires collaborations across the entire tech industry and beyond.

Link to competition on Kaggle.

Examples of DeepFakes
- Unmasking DeepFakes with simple Features: The method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed large amounts of labeled data, this approach showed very good results using only a few annotated training samples and even achieved good accuracies in fully unsupervised scenarios. Github repo
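The frequency-domain feature behind this approach can be sketched as an azimuthally averaged power spectrum of the image's 2D FFT, which is then fed to any basic classifier. The bin count and normalization below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def radial_power_spectrum(image, n_bins=64):
    """Azimuthally averaged 1D power spectrum of a grayscale image.

    Illustrative sketch of a classical frequency-domain feature:
    bin count and normalization are assumptions, not the authors'
    exact configuration.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    # Average the spectral power within each radial bin.
    total = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return total / np.maximum(counts, 1)

# Usage: the resulting 64-dim vector is the per-image feature for a
# simple classifier (e.g. an SVM or logistic regression).
rng = np.random.default_rng(0)
feat = radial_power_spectrum(rng.random((128, 128)))
print(feat.shape)  # (64,)
```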
- FaceForensics++: Learning to Detect Manipulated Facial Images: This paper examines the realism of state-of-the-art image manipulations and how difficult they are to detect, either automatically or by humans. Github repo
- In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking: The method is based on detecting eye blinking in videos, a physiological signal that is not well represented in synthesized fake videos. It is tested on eye-blinking detection benchmarks and also shows promising performance on detecting videos generated with DeepFake. Github repo
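The blink signal this method relies on can be illustrated with the eye aspect ratio (EAR), a simpler geometric stand-in for the paper's learned detector: EAR drops sharply when the eye closes, so a stretch of video with no low-EAR frames hints at the missing blinks. The six-point landmark layout below follows the common dlib 68-landmark convention, which is an assumption:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six (x, y) landmarks of one eye.

    Simplified stand-in for a learned blink detector; a blink shows
    up as a sharp dip in this ratio over consecutive frames.
    """
    eye = np.asarray(eye, dtype=float)
    # Vertical distances between upper- and lower-lid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

# Toy landmarks for an open and a nearly closed eye.
open_eye = [(0, 1), (2, 3), (4, 3), (6, 1), (4, -1), (2, -1)]
closed_eye = [(0, 0), (2, 0.2), (4, 0.2), (6, 0), (4, -0.2), (2, -0.2)]
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # True
```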
- Use of a Capsule Network to Detect Fake Images and Videos: "Capsule-Forensics" method to detect fake images and videos. Github repo
- Exposing DeepFake Videos By Detecting Face Warping Artifacts: A deep-learning-based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. The method is based on the observation that current DeepFake algorithms can only generate images of limited resolution, which need to be further warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, which can be effectively captured by convolutional neural networks (CNNs). Github repo
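The resolution-then-warp artifact can be simulated very roughly by down- and re-upsampling a face crop, which is useful for generating toy negative examples for a CNN. Nearest-neighbour resampling here is a simplification of the real affine warp a DeepFake pipeline applies:

```python
import numpy as np

def simulate_warp_artifact(face, scale=4):
    """Toy version of the resampling trace the detector targets.

    Downsamples a face crop and blows it back up to the original
    size, discarding high-frequency detail the way a low-resolution
    generated face would. Nearest-neighbour resizing is an
    illustrative simplification, not the paper's augmentation.
    """
    small = face[::scale, ::scale]          # crude downsample
    return np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)

rng = np.random.default_rng(0)
face = rng.random((64, 64))
fake = simulate_warp_artifact(face)
# Same shape as the input, but high-frequency detail is gone.
print(fake.shape == face.shape)  # True
```

Training a CNN on (original, artifact-bearing) pairs like this is one cheap way to avoid synthesizing full DeepFake videos for the negative class.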
- Limits of Deepfake Detection: A Robust Estimation Viewpoint: This work gives a generalizable statistical framework with guarantees on its reliability. In particular, the authors build on the information-theoretic study of authentication to cast deepfake detection as a hypothesis testing problem specifically for outputs of GANs, themselves viewed through a generalized robust statistics framework.