- The Creation and Detection of Deepfakes: A Survey (arXiv, 2020) [paper]
- DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection (arXiv, 2020) [paper]
- A Review on Face Reenactment Techniques (I4Tech, 2020) [paper]
- What comprises a good talking-head video generation?: A Survey and Benchmark (arXiv, 2020) [paper]
- Deep Audio-Visual Learning: A Survey (arXiv, 2020) [paper]
- Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR, 2022) [paper]
- Latent Image Animator: Learning to Animate Images via Latent Space Navigation (ICLR, 2022) [paper]
- Finding Directions in GAN’s Latent Space for Neural Face Reenactment (arXiv, 2022) [paper]
- FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment (PAMI, 2022) [paper]
- PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering (ICCV, 2021) [paper]
- LI-Net: Large-Pose Identity-Preserving Face Reenactment Network (ICME, 2021) [paper]
- One-shot Face Reenactment Using Appearance Adaptive Normalization (AAAI, 2021) [paper]
- One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing (arXiv, 2020) [paper]
- FACEGAN: Facial Attribute Controllable rEenactment GAN (WACV, 2020) [paper]
- LandmarkGAN: Synthesizing Faces from Landmarks (arXiv, 2020) [paper]
- Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars (ECCV, 2020) [paper] [code]
- Mesh Guided One-shot Face Reenactment using Graph Convolutional Networks (MM, 2020) [paper]
- Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment (CVPR, 2020) [paper]
- ReenactNet: Real-time Full Head Reenactment (arXiv, 2020) [paper]
- FReeNet: Multi-Identity Face Reenactment (CVPR, 2020) [paper] [code]
- FaR-GAN for One-Shot Face Reenactment (CVPRW, 2020) [paper]
- One-Shot Identity-Preserving Portrait Reenactment (arXiv, 2020) [paper]
- Neural Head Reenactment with Latent Pose Descriptors (CVPR, 2020) [paper] [code]
- ActGAN: Flexible and Efficient One-shot Face Reenactment (IWBF, 2020) [paper]
- Realistic Face Reenactment via Self-Supervised Disentangling of Identity and Pose (AAAI, 2020) [paper]
- First Order Motion Model for Image Animation (NeurIPS, 2019) [paper] [code]
- FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis (AAAI, 2019) [paper]
- MarioNETte: Few-shot Face Reenactment Preserving Identity of Unseen Targets (AAAI, 2019) [paper]
- Any-to-one Face Reenactment Based on Conditional Generative Adversarial Network (APSIPA, 2019) [paper]
- Make a Face: Towards Arbitrary High Fidelity Face Manipulation (ICCV, 2019) [paper]
- One-shot Face Reenactment (BMVC, 2019) [paper] [code]
- Deferred Neural Rendering: Image Synthesis Using Neural Textures (TOG, 2019) [paper]
- Animating Arbitrary Objects via Deep Motion Transfer (CVPR, 2019) [paper] [code]
- FSGAN: Subject Agnostic Face Swapping and Reenactment (ICCV, 2019) [paper] [code]
- GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV, 2018) [paper] [code]
- ReenactGAN: Learning to Reenact Faces via Boundary Transfer (ECCV, 2018) [paper] [code]
- Deep Video Portraits (SIGGRAPH, 2018) [paper]
- X2Face: A Network for Controlling Face Generation Using Images, Audio, and Pose Codes (ECCV, 2018) [paper] [code]
- Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR, 2016) [paper]
- SyncTalkFace: Talking Face Generation with Precise Lip-syncing via Audio-Lip Memory (AAAI, 2022) [paper]
- One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI, 2022) [paper]
- Audio-Driven Talking Face Video Generation with Dynamic Convolution Kernels (TMM, 2022) [paper]
- Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia, 2021) [paper]
- Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis (MM, 2021) [paper] [code]
- Talking Head Generation with Audio and Speech Related Facial Action Units (BMVC, 2021) [paper]
- 3D Talking Face with Personalized Pose Dynamics (TVCG, 2021) [paper]
- FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning (ICCV, 2021) [paper]
- AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis (ICCV, 2021) [paper]
- Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion (IJCAI, 2021) [paper]
- Flow-Guided One-Shot Talking Face Generation With a High-Resolution Audio-Visual Dataset (CVPR, 2021) [paper]
- Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (CVPR, 2021) [paper] [code]
- Audio-Driven Emotional Video Portraits (CVPR, 2021) [paper] [code]
- Everything's Talkin': Pareidolia Face Reenactment (CVPR, 2021) [paper]
- APB2FaceV2: Real-Time Audio-Guided Multi-Face Reenactment (ICASSP, 2021) [paper] [code]
- MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation (ECCV, 2020) [paper] [code]
- A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild (MM, 2020) [paper] [code]
- Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning (IJCAI, 2020) [paper]
- APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals (ICASSP, 2020) [paper] [code]
- MakeItTalk: Speaker-Aware Talking Head Animation (SIGGRAPH Asia, 2020) [paper] [code]
- Everybody’s Talkin’: Let Me Talk as You Want (arXiv, 2020) [paper]
- Talking Face Generation by Adversarially Disentangled Audio-Visual Representation (AAAI, 2019) [paper]
- Towards Automatic Face-to-Face Translation (MM, 2019) [paper] [code]
- Few-Shot Adversarial Learning of Realistic Neural Talking Head Models (ICCV, 2019) [paper]
- Learning the Face Behind a Voice (CVPR, 2019) [paper] [code]
- Hierarchical Cross-Modal Talking Face Generation with Dynamic Pixel-Wise Loss (CVPR, 2019) [paper] [code]
- Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks (ICASSP, 2019) [paper] [code]
- Face Reconstruction from Voice using Generative Adversarial Networks (NeurIPS, 2019) [paper]
- Lip movements generation at a glance (ECCV, 2018) [paper]