Awesome Neural Rendering Papers

A collection of resources on neural rendering.

Contributing

If you think I have missed something, or have any suggestions (papers, implementations, or other resources), feel free to open a pull request.

Feedback and contributions are welcome!

Introduction to Neural Rendering

Neural Rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

Ayush Tewari et al. define Neural Rendering as

Deep image or video generation approaches that enable explicit or implicit control of scene properties such as illumination, camera parameters, pose, geometry, appearance, and semantic structure.

A typical neural rendering approach takes as input images corresponding to certain scene conditions (for example, viewpoint, lighting, layout, etc.), builds a “neural” scene representation from them, and “renders” this representation under novel scene properties to synthesize novel images.
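
To make this pipeline concrete, below is a minimal sketch of the encode-then-render pattern in PyTorch. It is purely illustrative: the module names (SceneEncoder, NeuralRenderer), the flattened-pose conditioning, and all hyperparameters are hypothetical and do not come from any paper listed below.

```python
# Minimal, illustrative sketch of the encode-then-render pattern described above.
# All module and parameter names are hypothetical, not taken from any specific paper.
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    """Maps observed images (and their known viewpoints) to a latent scene code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64 + 12, latent_dim)  # 12 = flattened 3x4 camera pose

    def forward(self, images, poses):
        feats = self.cnn(images).flatten(1)                 # (N, 64)
        codes = self.fc(torch.cat([feats, poses], dim=1))   # per-view codes
        return codes.mean(dim=0)                            # aggregate into one scene code

class NeuralRenderer(nn.Module):
    """Decodes the scene code under a novel viewpoint into an image."""
    def __init__(self, latent_dim=256, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 12, 512), nn.ReLU(),
            nn.Linear(512, 3 * image_size * image_size), nn.Sigmoid(),
        )

    def forward(self, scene_code, novel_pose):
        x = torch.cat([scene_code, novel_pose], dim=-1)
        return self.mlp(x).view(3, self.image_size, self.image_size)

# Usage: observe a scene from 4 views, then synthesize an unseen 5th view.
encoder, renderer = SceneEncoder(), NeuralRenderer()
images = torch.rand(4, 3, 64, 64)      # observed views
poses = torch.rand(4, 12)              # flattened camera extrinsics per view
scene_code = encoder(images, poses)
novel_view = renderer(scene_code, torch.rand(12))  # (3, 64, 64) synthesized image
```

In the papers below, the scene representation may instead be a voxel grid, point cloud, mesh, texture, or implicit function, and the renderer ranges from a plain decoder network to a differentiable volume or rasterization renderer.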

Given high-quality scene specifications, Classic Rendering Methods can render photorealistic images for a variety of complex real-world phenomena. Moreover, rendering gives us explicit editing control over all the elements of the scene: camera viewpoint, lighting, geometry and materials. However, building high-quality scene models, especially directly from images, requires significant manual effort, and automated scene modeling from images is an open research problem. On the other hand, Deep Generative Networks are now starting to produce visually compelling images and videos either from random noise, or conditioned on certain user specifications like scene segmentation and layout. However, they do not yet allow for fine-grained control over scene appearance and cannot always handle the complex, non-local, 3D interactions between scene properties. In contrast, neural rendering methods hold the promise of combining these approaches to enable controllable, high-quality synthesis of novel images from input images/videos.
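
The "integration of differentiable rendering into network training" mentioned above comes down to writing the renderer with differentiable operations, so that an image-space loss can be backpropagated into scene or network parameters. The toy analysis-by-synthesis loop below illustrates the idea with a deliberately simplified single-Gaussian "renderer"; it is a hedged sketch under those assumptions, not the method of any paper in this list.

```python
# Toy analysis-by-synthesis loop: because the renderer uses only differentiable ops,
# gradients of an image loss flow back into the scene parameters (position, color).
import torch

H = W = 64
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")

def render(center, color, sigma=0.15):
    """Differentiable toy renderer: splat one colored Gaussian onto an HxW canvas."""
    dist2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    alpha = torch.exp(-dist2 / (2 * sigma ** 2))   # (H, W) soft footprint
    return alpha.unsqueeze(-1) * color             # (H, W, 3) image

# Ground-truth image rendered from hidden parameters we pretend not to know.
target = render(torch.tensor([0.7, 0.3]), torch.tensor([0.9, 0.2, 0.1])).detach()

# Recover those parameters purely from the image loss via gradient descent.
center = torch.tensor([0.5, 0.5], requires_grad=True)
color = torch.tensor([0.5, 0.5, 0.5], requires_grad=True)
opt = torch.optim.Adam([center, color], lr=0.02)

for step in range(500):
    loss = ((render(center, color) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()   # gradients reach the scene parameters through the renderer
    opt.step()

# center and color should move toward [0.7, 0.3] and [0.9, 0.2, 0.1].
print(center.detach(), color.detach())
```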

Related Surveys and Course Notes

State of the Art on Neural Rendering.
Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B Goldman, Michael Zollhöfer.
Eurographics 2020.

3D Scene Generation.
Angel X. Chang, Daniel Ritchie, Qixing Huang, Manolis Savva.
CVPR 2019 Workshop.

Inverse Rendering

NiLBS: Neural Inverse Linear Blend Skinning.
Timothy Jeruzalski, David I.W. Levin, Alec Jacobson, Paul Lalonde, Mohammad Norouzi, Andrea Tagliasacchi.
arxiv 2020. [PDF]

Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer.
Wenzheng Chen, Jun Gao, Huan Ling, Edward J. Smith, Jaakko Lehtinen, Alec Jacobson, Sanja Fidler.
NeurIPS 2019. [PDF]

InverseRenderNet: Learning Single Image Inverse Rendering.
Ye Yu, William A. P. Smith.
CVPR 2019. [PDF] [Github] [IIW Dataset]

Learning Inverse Rendering of Faces from Real-world Videos.
Yuda Qiu, Zhangyang Xiong, Kai Han, Zhongyuan Wang, Zixiang Xiong, Xiaoguang Han.
arxiv, 2020. [PDF] [Github]

Fluid and Smoke Simulation

Wave Curves: Simulating Lagrangian water waves on dynamically deforming surfaces.
Tomas Skrivan, Andreas Soderstrom, John Johansson, Christoph Sprenger, Ken Museth, Chris Wojtan.
ACM Transactions on Graphics (SIGGRAPH 2020). [PDF]

Constraint Bubbles and Affine Regions: Reduced Fluid Models for Efficient Immersed Bubbles and Flexible Spatial Coarsening.
Ryan Goldade, Mridul Aanjaneya, Christopher Batty.
TOG 2020. [PDF] [Project] [Github]

Chemomechanical Simulation of Soap Film Flow on Spherical Bubbles.
Weizhen Huang, Julian Iseringhausen, Tom Kneiphof, Ziyin Qu, Chenfanfu Jiang, Matthias B. Hullin.
TOG 2020. [PDF] [Project] [Github]

Fast and Scalable Turbulent Flow Simulation with Two-Way Coupling.
Wei Li, Yixin Chen, Mathieu Desbrun, Changxi Zheng, Xiaopei Liu.
SIGGRAPH 2020. [PDF]

Lagrangian Neural Style Transfer for Fluids.
Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, Barbara Solenthaler.
SIGGRAPH 2020. [PDF]

Transport-Based Neural Style Transfer for Smoke Simulations.
Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, Barbara Solenthaler.
SIGGRAPH ASIA 2019. [PDF]

Differentiable Physics-Based Simulation

DiffTaichi: Differentiable Programming for Physical Simulation.
Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, Fredo Durand.
ICLR 2020. [PDF] [Github]

A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising.
Kaixuan Wei, Ying Fu, Jiaolong Yang, Hua Huang.
CVPR 2020. [PDF] [Github]

Use the Force, Luke! Learning to Predict Physical Forces by Simulating Effects.
Kiana Ehsani, Shubham Tulsiani, Saurabh Gupta, Ali Farhadi, Abhinav Gupta.
CVPR 2020. [PDF]

SAPIEN: A SimulAted Part-based Interactive ENvironment.
Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, Hao Su.
CVPR 2020. [PDF] [Project] [Documentation] [Github]

GarNet: A Two-Stream Network for Fast and Accurate 3D Cloth Draping.
Erhan Gundogdu, Victor Constantin, Amrollah Seifoddini, Minh Dang, Mathieu Salzmann, Pascal Fua.
ICCV 2019. [PDF] [Supplementary Material] [Project] [Dataset]

Neural Hair Rendering

Neural Hair Rendering.
Menglei Chai, Jian Ren, Sergey Tulyakov.
arxiv 2020. [PDF]

MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing.
Zhentao Tan, Menglei Chai, Dongdong Chen, Jing Liao, Qi Chu, Lu Yuan, Sergey Tulyakov, Nenghai Yu.
SIGGRAPH 2020. [PDF]

Individual Object Manipulation

Self-Supervised Scene De-occlusion.
Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, and Chen Change Loy.
CVPR 2020. [PDF] [Github] [Project] [Demo]

3DLSN: End-to-End Optimization of Scene Layout.
Andrew Luo, Zhoutong Zhang, Jiajun Wu, Joshua B. Tenenbaum.
CVPR 2020. [PDF] [Project]

DJRN: Detailed 2D-3D Joint Representation for Human-Object Interaction.
Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, Cewu Lu.
CVPR 2020. [PDF] [Github]

Learning to Manipulate Individual Objects in an Image.
Yanchao Yang, Yutong Chen, Stefano Soatto.
arxiv 2020. [PDF]

AutoSweep: Recovering 3D Editable Objects from a Single Photograph.
Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng.
IVCJ 2018. [PDF] [Project]

Semantic Photo Synthesis and Manipulation

pix2pixHD: High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan Catanzaro.
CVPR 2018. [PDF] [Github]

SPADE: Semantic Image Synthesis with Spatially-Adaptive Normalization.
Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu.
CVPR 2019. [PDF] [Github]

Semantic Bottleneck Scene Generation.
Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic.
arxiv, 2019. [PDF]

Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation.
Hao Tang, Dan Xu, Yan Yan, Philip H. S. Torr, Nicu Sebe.
CVPR 2020. [PDF] [Github]

SelectionGAN: Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation.
Hao Tang, Dan Xu, Nicu Sebe, Yanzhi Wang, Jason J. Corso, Yan Yan.
CVPR 2019. [PDF] [Github]

Texture and Surface Mapping

GPU-Accelerated Mobile Multi-view Style Transfer.
Puneet Kohli, Saravana Gunaseelan, Jason Orozco, Yiwen Hua, Edward Li, Nicolas Dahlquist.
arxiv 2020. [PDF]

Leveraging 2D Data to Learn Textured 3D Mesh Generation.
Paul Henderson, Vagia Tsiminaki, Christoph H. Lampert.
CVPR 2020. [PDF]

Articulation-aware Canonical Surface Mapping.
Nilesh Kulkarni, Abhinav Gupta, David F. Fouhey, Shubham Tulsiani.
CVPR 2020. [PDF] [Github] [Project]

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World.
Shangbang Long, Cong Yao.
CVPR 2020. [PDF] [Github]

Adversarial Texture Optimization from RGB-D Scans.
Jingwei Huang, Justus Thies, Angela Dai, Abhijit Kundu, Chiyu Jiang, Leonidas Guibas, Matthias Nießner, Thomas Funkhouser.
CVPR 2020. [PDF] [Project] [Github] [pyRender]

CSM: Canonical Surface Mapping via Geometric Cycle Consistency.
Nilesh Kulkarni, Abhinav Gupta, Shubham Tulsiani.
ICCV 2019. [PDF] [Github] [Project]

Texture Mapping for 3D Reconstruction with RGB-D Sensor.
Yanping Fu, Qingan Yan, Long Yang, Jie Liao, Chunxia Xiao.
CVPR 2018. [PDF] [thecvf] [Code on Github]

Let There Be Color! - Large-Scale Texturing of 3D Reconstructions.
Michael Waechter, Nils Moehrle, Michael Goesele.
ECCV 2014. [PDF] [Project] [Github] [rayint] [Eigen] [Multi-View Environment] [mapMAP]

Learning Category-Specific Mesh Reconstruction from Image Collections.
Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, Jitendra Malik.
ECCV 2018. [Github] [Project]

Texture Fields: Learning Texture Representations in Function Space.
Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, Andreas Geiger.
ICCV 2019. [PDF]

AtlasNet: A Papier-Mache Approach to Learning 3D Surface Generation.
Thibault Groueix, Matthew Fisher, Vladimir Kim, Bryan Russell, Mathieu Aubry.
CVPR 2018. [PDF] [Project] [Github]

Learning Elementary Structures For 3D Shape Generation And Matching.
Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry.
arxiv, 2019. [PDF] [Project] [Github]

Learning to Generate Textures on Meshes.
Amit Raj, Cusuh Ham, Connelly Barnes, Vladimir Kim, Jingwan Lu, James Hays.
CVPR Deep Generative Models for 3D Understanding 2019 (Best Paper). [PDF]

Unsupervised Texture Transfer from Images to Model Collections.
Tuanfeng Y. Wang, Hao Su, Qixing Huang, Jingwei Huang, Leonidas J. Guibas, Niloy J. Mitra.
SIGGRAPH Asia 2016. [PDF] [Project] [Data]

Neural Scene Representation and Rendering

CoReNet: Coherent 3D Scene Reconstruction From a Single RGB Image.
Stefan Popov, Pablo Bauszat, Vittorio Ferrari.
arxiv 2020. [PDF]

Single-View View Synthesis with Multiplane Images.
Richard Tucker and Noah Snavely.
CVPR 2020. [PDF] [Project]

LIMP: Learning Latent Shape Representations with Metric Preservation Priors.
Luca Cosmo, Antonio Norelli, Oshri Halimi, Ron Kimmel, Emanuele Rodolà.
arxiv 2020. [PDF]

Learning 3D Part Assembly from a Single Image.
Yichen Li, Kaichun Mo, Lin Shao, Minhyuk Sung, Leonidas Guibas.
arxiv 2020. [PDF]

Curriculum DeepSDF.
Yueqi Duan, Haidong Zhu, He Wang, Li Yi, Ram Nevatia, Leonidas J. Guibas.
arxiv, 19 Mar 2020. [PDF] [Github]

PolyGen: An Autoregressive Generative Model of 3D Meshes.
Charlie Nash, Yaroslav Ganin, S. M. Ali Eslami, Peter W. Battaglia.
arxiv, 23 Feb 2020. [PDF]

Self-supervised Learning of 3D Objects from Natural Images.
Hiroharu Kato, Tatsuya Harada.
arxiv, 20 Nov. 2019. [PDF] [Project]

BlockGAN: Learning 3D Object-Aware Scene Representations from Unlabelled Images.
Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, Niloy Mitra.
arxiv, 20 Feb 2020. [PDF] [Project]

DualSDF: Semantic Shape Manipulation using a Two-Level Representation.
Zekun Hao, Hadar Averbuch-Elor, Noah Snavely, Serge Belongie.
CVPR 2020. [PDF]

Learning a Neural 3D Texture Space from 2D Exemplars.
Philipp Henzler, Niloy J. Mitra, Tobias Ritschel.
CVPR 2020. [PDF] [Project]

Neural Contours: Learning to Draw Lines from 3D Shapes.
Difan Liu, Mohamed Nabail, Aaron Hertzmann, Evangelos Kalogerakis.
CVPR 2020. [PDF] [Github]

Pix2Shape: Towards Unsupervised Learning of 3D Scenes from Images using a View-based Representation.
Sai Rajeswar, Fahim Mannan, Florian Golemo, Jérôme Parent-Lévesque, David Vazquez, Derek Nowrouzezahrai, Aaron Courville.
IJCV 2020. [PDF]

VCN: Volumetric Correspondence Networks for Optical Flow.
Gengshan Yang, Deva Ramanan.
NeurIPS 2019. [PDF] [GitHub] [Project]

Transformable Bottleneck Networks.
Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, Linjie Luo.
ICCV 2019. [PDF]

Equivariant Multi-View Networks.
Carlos Esteves, Yinshuang Xu, Christine Allen-Blanchette, Kostas Daniilidis.
ICCV 2019. [PDF]

DeepVoxels: Learning Persistent 3D Feature Embeddings.
Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer.
CVPR 2019 (Oral). [Project] [PDF] [Code]

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation.
Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove.
CVPR 2019. [PDF] [Github]

DeepSDF x Sim(3): Extending DeepSDF for automatic 3D shape retrieval and similarity transform estimation.
Oladapo Afolabi, Allen Yang, Shankar S. Sastry.
arxiv 2020. [PDF]

Learning View Priors for Single-view 3D Reconstruction.
Hiroharu Kato, Tatsuya Harada.
CVPR 2019. [PDF] [Project] [Github]

HoloGAN: Unsupervised Learning of 3D Representations from Natural Images.
Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang.
ICCV 2019. [PDF] [GitHub]

C3DPO: Canonical 3D Pose Networks for Non-Rigid Structure From Motion.
David Novotny, Nikhila Ravi, Benjamin Graham, Natalia Neverova, Andrea Vedaldi.
ICCV 2019. [PDF] [Github] [Project]

CSM: Canonical Surface Mapping via Geometric Cycle Consistency.
Nilesh Kulkarni, Abhinav Gupta, Shubham Tulsiani.
ICCV 2019. [PDF] [Github] [Project]

Novel-View Synthesis for Objects and Scenes

Novel-View Synthesis

Neural Point-Based Graphics.
Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, Victor Lempitsky.
arxiv 2020. [PDF] [Project]

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng.
arxiv, 19 Mar 2020. [PDF] [Project] [Github-Tensorflow] [krrish94-PyTorch] [yenchenlin-PyTorch]

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations.
Vincent Sitzmann, Michael Zollhöfer, Gordon Wetzstein.
NeurIPS 2019 (Oral, Honorable Mention "Outstanding New Directions"). [PDF] [Project] [Github] [Dataset]

LLFF: Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines.
Ben Mildenhall, Pratul Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, Abhishek Kar.
SIGGRAPH 2019. [PDF] [Project] [Github]

Neural Volumes: Learning Dynamic Renderable Volumes from Images.
Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, Yaser Sheikh.
SIGGRAPH 2019. [PDF] [Github]

Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis.
Jogendra Nath Kundu, Siddharth Seth, Varun Jampani, Mugalodi Rakesh, R. Venkatesh Babu, Anirban Chakraborty.
CVPR 2020. [PDF]

IGNOR: Image-guided Neural Object Rendering.
Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner.
ICLR 2020. arxiv, 26 Nov 2018 (15 Jan 2020). [PDF] [Project]

Monocular Neural Image Based Rendering with Continuous View Control.
Xu Chen, Jie Song, Otmar Hilliges.
ICCV 2019. [PDF]

Extreme View Synthesis.
Inchang Choi, Orazio Gallo, Alejandro Troccoli, Min H. Kim, Jan Kautz.
ICCV 2019. [PDF]

Transformable Bottleneck Networks.
Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, Linjie Luo.
ICCV 2019. [PDF]

View Independent Generative Adversarial Network for Novel View Synthesis.
Xiaogang Xu, Ying-Cong Chen, Jiaya Jia.
ICCV 2019. [PDF]

Light, Reflectance, Illuminance and Shade

Enlighten Me: Importance of Brightness and Shadow for Character Emotion and Appeal.
Pisut Wisessing, Katja Zibrek, Douglas W. Cunningham, John Dingliana, Rachel McDonnell.
TOG 2020. [PDF]

Portrait Shadow Manipulation.
Xuaner Cecilia Zhang, Jonathan T. Barron, Yun-Ta Tsai, Rohit Pandey, Xiuming Zhang, Ren Ng, David E. Jacobs.
SIGGRAPH 2020. [PDF] [Project]

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination.
Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely.
CVPR 2020. [PDF] [Github] [Project]

Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images.
Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, Ravi Ramamoorthi.
CVPR 2020. [PDF]

Neural Illumination: Lighting Prediction for Indoor Environments.
Shuran Song and Thomas Funkhouser.
CVPR 2019. [PDF] [Project]

Learning to Shade Hand-drawn Sketches.
Qingyuan Zheng, Zhuoru Li, Adam Bargteil.
CVPR 2020. [PDF]

Generating Digital Painting Lighting Effects via RGB-space Geometry.
Lvmin Zhang, Edgar Simo-Serra, Yi Ji, and Chunping Liu.
SIGGRAPH 2020 (TOG 2020). [Project] [Github]

Deep Single-Image Portrait Relighting.
Hao Zhou, Sunil Hadap, Kalyan Sunkavalli, David W. Jacobs.
ICCV 2019. [PDF] [Github] [Project] [DPR Dataset]

Single Image Portrait Relighting.
Tiancheng Sun, Jonathan T. Barron, Yun-Ta Tsai, Zexiang Xu, Xueming Yu, Graham Fyffe, Christoph Rhemann, Jay Busch, Paul Debevec, Ravi Ramamoorthi.
SIGGRAPH 2019. [PDF]

Multi-view Relighting using a Geometry-Aware Network.
Julien Philip, Michael Gharbi, Tinghui Zhou, Alexei (Alyosha) Efros, George Drettakis.
SIGGRAPH 2019. [PDF]

Illumination Decomposition for Photograph with Multiple Light Sources.
Ling Zhang, Qingan Yan, Zheng Liu, Hua Zou, Chunxia Xiao.
TIP 2017. [PDF] [Github]

Learning to Predict Indoor Illumination from a Single Image.
Marc-André Gardner, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagné, and Jean-François Lalonde.
ACM Transactions on Graphics (SIGGRAPH Asia), 2017. [PDF] [Dataset] [Homepage]

Deep Parametric Indoor Lighting Estimation.
Marc-André Gardner, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Christian Gagné, and Jean-François Lalonde.
ICCV 2019. [PDF] [Supplementary material] [Laval Indoor HDR Database and Depth] [Project]

Fast Spatially-Varying Indoor Lighting Estimation.
Mathieu Garon, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Jean-François Lalonde.
CVPR 2019. [PDF] [Supplementary material] [Project] [Laval Indoor Spatially Varying HDR Dataset / 79 HDR Light Probes]

GLoSH: Global-Local Spherical Harmonics for Intrinsic Image Decomposition.
Hao Zhou, Xiang Yu, David W Jacobs.
ICCV 2019. [PDF] [Supplement] [Poster] [Spherical Harmonic Tools]

SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild.
Soumyadip Sengupta, Angjoo Kanazawa, Carlos D. Castillo, David W. Jacobs.
CVPR 2018. [Project] [PDF] [Github]

Occlusion-aware 3D Morphable Models and an Illumination Prior for Face Image Analysis.
Bernhard Egger, Sandro Schoenborn, Andreas Schneider, Adam Kortylewski, Andreas Morel-Forster, Clemens Blumer and Thomas Vetter.
IJCV 2018. [BIP Dataset] [PDF]

DNR: A Neural Rendering Framework for Free-Viewpoint Relighting.
Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu.
arxiv, 26 Nov 2019. [PDF]

Motion Transfer, Retargeting, Reenactment, Dubbing and Animation

[awesome-human-motion]

FaR-GAN for One-Shot Face Reenactment.
Hanxiang Hao, Sriram Baireddy, Amy R. Reibman, Edward J. Delp.
AI for content creation workshop at CVPR 2020. [PDF]

Skeleton-Aware Networks for Deep Motion Retargeting.
Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, Baoquan Chen.
SIGGRAPH 2020. [Github] [Project]

Unpaired Motion Style Transfer from Video to Animation.
Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen.
SIGGRAPH 2020. [Github] [Project]

MakeItTalk: Speaker-Aware Talking Head Animation.
Yang Zhou, Dingzeyu Li, Xintong Han, Evangelos Kalogerakis, Eli Shechtman, Jose Echevarria.
arxiv, 2020. [PDF]

One-Shot Identity-Preserving Portrait Reenactment.
Sitao Xiang, Yuming Gu, Pengda Xiang, Mingming He, Koki Nagano, Haiwei Chen, Hao Li.
arxiv, 2020. [PDF]

Neural Head Reenactment with Latent Pose Descriptors.
Egor Burkov, Igor Pasechnik, Artur Grigorev, Victor Lempitsky.
CVPR 2020. [PDF]

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation.
Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt.
arxiv, 2020. [PDF]

Text-based Editing of Talking-head Video.
Ohad Fried, Ayush Tewari, Michael Zollhöfer, Adam Finkelstein, Eli Shechtman, Dan B Goldman, Kyle Genova, Zeyu Jin, Christian Theobalt, Maneesh Agrawala.
SIGGRAPH 2019. [PDF] [Project]

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images.
Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt.
CVPR 2020. [PDF] [Project]

TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting.
Zhuoqian Yang, Wentao Zhu, Wayne Wu, Chen Qian, Qiang Zhou, Bolei Zhou, Chen Change Loy.
CVPR 2020. [PDF] [Github] [Project]

Human Motion Transfer from Poses in the Wild.
Jian Ren, Menglei Chai, Sergey Tulyakov, Chen Fang, Xiaohui Shen, Jianchao Yang.
arxiv 2020. [PDF]

First Order Motion Model for Image Animation.
Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe.
NeurIPS 2019. [PDF] [Project] [Github]

Neural Human Video Rendering: Joint Learning of Dynamic Textures and Rendering-to-Video Translation.
Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt.
arxiv, 14 Jan 2020. [PDF]

Deferred Neural Rendering: Image Synthesis using Neural Textures.
Justus Thies, Michael Zollhöfer, Matthias Nießner.
SIGGRAPH 2019. [PDF]

LOGAN: Unpaired Shape Transform in Latent Overcomplete Space.
Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, Hao Zhang.
SIGGRAPH Asia, 2019. [PDF]

FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis.
Kuangxiao Gu, Yuqian Zhou, Thomas Huang.
AAAI 2020. [PDF] [GitHub]

Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis.
Wen Liu, Zhixin Piao, Jie Min, Wenhan Luo, Lin Ma, Shenghua Gao.
ICCV 2019. [PDF] [HomePage] [Github].

Learning Character-Agnostic Motion for Motion Retargeting in 2D.
Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or.
SIGGRAPH 2019. [PDF] [Github] [Project]

Progressive Pose Attention Transfer for Person Image Generation.
Zhen Zhu, Tengteng Huang, Baoguang Shi, Miao Yu, Bofei Wang, Xiang Bai.
CVPR 2019. [Project] [PDF]

Textured Neural Avatars.
Aliaksandra Shysheya, Egor Zakharov, Kara-Ali Aliev, Renat Bashirov, Egor Burkov, Karim Iskakov, Aleksei Ivakhnenko, Yury Malkov, Igor Pasechnik, Dmitry Ulyanov, Alexander Vakhitov, Victor Lempitsky.
CVPR 2019 (oral). [PDF] [Project]

Appearance Composing GAN: A General Method for Appearance-Controllable Human Video Motion Transfer.
Dongxu Wei, Haibin Shen, Kejie Huang.
arxiv, 25 Nov 2019. [PDF]

EBT: Everybody's Talkin': Let Me Talk as You Want.
Linsen Song, Wayne Wu, Chen Qian, Ran He, Chen Change Loy.
arxiv, 15 Jan 2020. [PDF] [Project]

Photo Wake-Up: 3D Character Animation from a Single Photo.
Chung-Yi Weng, Brian Curless, Ira Kemelmacher-Shlizerman.
CVPR 2019. [PDF] [Project]

License

This work is licensed under a Creative Commons Attribution 4.0 International License.