awesome-video-generate

A curated list of awesome video generation resources and projects

awesome-video-generate-since-2022

A curated list of awesome video generation resources and projects since 2022 (the year ChatGPT dropped)

Video Diffusion

Table of Contents

Open-source Toolboxes and Foundation Models

Evaluation Benchmarks and Metrics

Video Generation

Controllable Video Generation

Motion Customization

Long Video / Film Generation

Video Generation with Physical Prior / 3D

Video Editing

Long-form Video Generation and Completion

Human or Subject Motion

AI Safety for Video Generation

Video Enhancement and Restoration

Audio Synthesis for Video

Human Feedback for Video Generation

Policy Learning with Video Generation

3D / NeRF

World Model

Video Understanding

Healthcare and Biology

Text-To-Video

Table of Contents

Text To Video Generators

My top recommendation among AI video generators is DeepBrain's AI Studios, which features a hyperrealistic avatar and stands out for quality among AI video SaaS products. You can produce a video simply by typing a script, and creating AI videos is even easier thanks to its chroma-key and PowerPoint-to-video functions. You can try it for free.

  • Select your AI presenter first (AI Studios provides more than 12 avatars for your AI videos, supports 80+ TTS languages, and offers native AI avatars).
  • Enter your AI video script. AI Studios also integrates ChatGPT, so it can generate scripts automatically.
  • Create your AI video and then download, stream, or translate it.

The use of the best AI video generators has revolutionized the way businesses create marketing content.

Runway AI is an innovative set of resources for video editors that taps into the potential of AI. It comes equipped with a variety of powerful features, such as:

  • Green screen tools: remove the background from any video.
  • Erase and replace: select any object in a frame (such as a tossed ball in a game of catch) and have Runway swap it out with anything else.
  • Infinite image: generate a picture with AI, then extend it beyond its original borders.

Video editors will find these AI tools revolutionary: even a novice can execute complex video editing tasks in a matter of seconds. And Runway AI has even more in store. The Gen-1 and Gen-2 models are a quantum leap forward for video editing.

It is one of the best AI video generators for putting artificial intelligence's cutting-edge technology to work. You can try it for free by clicking here.

Kaiber AI is a platform that lets you generate high-quality videos using artificial intelligence. It has produced AI videos for artists such as Linkin Park, Kid Cudi, and Mike Shinoda.

You can choose from a variety of templates, customize the characters, backgrounds, and dialogues, and let Kaiber AI video do the rest. You can also upload your own scripts and voiceovers, and Kaiber AI video will match them with the best visuals and animations.

Kaiber is one of the best AI video generators on the market, especially for engaging videos. To learn more and see examples of what you can create with it, visit the Kaiber website, where you can also sign up for a free trial and start creating your own videos in minutes.

Stable Diffusion Videos is a free online text-to-video AI generator that uses the Stable Diffusion model to make videos from text prompts.
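Tools in this family typically make a video by interpolating between the latent noise vectors of two prompts and decoding each intermediate latent into a frame. A minimal sketch of the interpolation step (spherical interpolation between two random latents; all names and shapes here are illustrative, not the tool's actual API):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical interpolation between two flattened latent vectors."""
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two latents
    if theta < eps:                 # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Two random Gaussian latents standing in for the latents of two prompts.
rng = np.random.default_rng(0)
latent_a = rng.standard_normal(4 * 64 * 64)
latent_b = rng.standard_normal(4 * 64 * 64)

# 24 in-between latents; decoding each one would yield one video frame.
frames = [slerp(t, latent_a, latent_b) for t in np.linspace(0.0, 1.0, 24)]
```

Slerp is preferred over straight linear interpolation here because Gaussian latents concentrate on a sphere, so intermediate points keep a realistic norm.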

Deforum is another text-to-video AI generator; it builds animations frame by frame, with each new frame conditioned on the frames before it. Using Deforum SD, it is now simpler than ever to produce coherent videos and animations from Stable Diffusion outputs.
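Frame-to-frame generation of this kind can be pictured as an img2img feedback loop: each new frame starts from a re-noised copy of the previous one, which keeps consecutive frames coherent. A schematic sketch, where `denoise` is only a stand-in for a real prompt-guided diffusion model and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def denoise(noisy, target):
    # Stand-in for a diffusion model guided by a prompt: pull the noisy
    # frame part-way toward the (prompt-defined) target image.
    return 0.7 * noisy + 0.3 * target

def animate(first_frame, target, n_frames, strength=0.4):
    frames = [first_frame]
    for _ in range(n_frames - 1):
        prev = frames[-1]
        # Re-noise the previous frame, then denoise it again ("img2img").
        noisy = (1 - strength) * prev + strength * rng.standard_normal(prev.shape)
        frames.append(denoise(noisy, target))
    return frames

start = np.zeros((16, 16, 3))   # black first frame
goal = np.ones((16, 16, 3))     # the prompt's "target" look
clip = animate(start, goal, n_frames=12)
```

The `strength` parameter plays the same role as the denoising strength in img2img pipelines: higher values let each frame change more, at the cost of flicker.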

Make-A-Video, a new AI text-to-video generator from Meta, makes amusing short clips from just a few phrases.

The research, created to enable text-to-video generation, builds on recent advances in text-to-image generation. Besides text, photographs and other videos can also be used as input. Although a time axis has been added, the underlying approach is still diffusion.
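The added time axis can be pictured as factorized processing: a video tensor of shape (T, H, W, C) is treated as T independent images by the spatial layers (exactly as in a text-to-image model), and then temporal layers mix information across frames at each pixel. An illustrative sketch, where a fixed smoothing kernel stands in for a learned temporal layer (the function names and shapes are ours, not Meta's):

```python
import numpy as np

def spatial_step(frames):
    # Placeholder for a per-frame (image) diffusion layer: it operates on
    # each frame independently, as in a text-to-image model.
    return frames - frames.mean(axis=(1, 2), keepdims=True)

def temporal_step(frames, kernel=np.array([0.25, 0.5, 0.25])):
    # Placeholder for a temporal layer: it mixes each pixel across the new
    # time axis (here, a fixed 1-D smoothing kernel over the frame index).
    T = frames.shape[0]
    out = np.zeros_like(frames)
    for t in range(T):
        for k, w in zip((t - 1, t, t + 1), kernel):
            out[t] += w * frames[np.clip(k, 0, T - 1)]
    return out

video = np.random.default_rng(1).standard_normal((8, 16, 16, 3))  # (T, H, W, C)
video = temporal_step(spatial_step(video))   # spatial pass, then temporal pass
print(video.shape)  # (8, 16, 16, 3)
```

Because the spatial layers never see the time axis, pretrained image-diffusion weights can be reused and only the temporal layers need to learn motion.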

Using images and their descriptions, the system learns how the world looks and how it is typically described. Unlabeled videos also help the model learn how the world moves.

With just a few words or lines of text, you can use this to create funny, original videos that bring your imagination to life.

With the help of VEED.io's robust AI technology and user-friendly interface, you can quickly produce great videos online. Use it as a video editor to trim, crop, add subtitles, and more, or to convert any text into video.

Here’s how it functions:

  • Choose a stock video or upload your own
  • You can edit the video by adding text, photos, etc.
  • Download and export the movie

Lumen5 is a fantastic online tool for producing AI videos; more than 800,000 users rely on it to produce quality video content. The best thing about it is how simple it is to use: little video-editing expertise is required, and its AI lets you generate videos from scratch or from existing content in a matter of minutes.

Here’s how it functions:

  • Type a script or text here.
  • Based on the script, Lumen5 will automatically select the ideal audio and images.
  • You can upload your own text, music, and logos.
  • Download and distribute the movie

Design.AI is an amazing AI-powered content creation tool that converts your blog entries and articles into engaging videos. It can also help you swiftly design logos, videos, and banners.

Here’s how it functions:

  • Insert your text or script first.
  • Choose an industry.
  • Choose a voice you prefer and a style of video.
  • The AI will immediately produce a video preview. You can then modify your video and add text and music to make it more visually appealing.

One of the most impressive AI video generators is Synthesia, which makes it simple to create realistic AI videos in a matter of minutes. Synthesia uses advanced natural language processing (NLP) and machine learning to create high-quality videos from text in over 50 languages without any actors, cameras, or microphones. Synthesia is a great option if you want budget-friendly videos that look professional. To build your own AI video, follow these three simple steps.

  • First, select your AI presenter (Synthesia provides more than 40 avatars for your AI videos; alternatively, you can create your own avatar).
  • Second, enter your AI video script.
  • Third, create your AI video, then download, stream, or translate it.

InVideo is an effective video editing program that converts text into videos. You can use more than 5,000 templates, iStock media, a music library, filters, and other features.

For simple conversion of text-based content into video, InVideo provides more than 50 AI-powered themes. From its library of 5,000+ configurable templates, you can make all kinds of videos, including video ads, promos, YouTube videos, intros, and more.

Simply choose a template or theme and type your text. That's it: you can quickly create an incredible AI video from that script, and you can add media such as audio, video, and text.

You can use GliaCloud to seamlessly create professional-looking videos from existing text content in minutes. There’s no need for special equipment or prior knowledge of video editing software. Simply upload your article or post the URL, and it will automatically create an engaging video.

You can then preview and edit this script if required before generating an HD-quality video file ready to upload to your website or social media channels.

Fliki is a tool that converts text into audio and video in under a minute using AI voices.

With just a few simple steps, Fliki can transform your blog into narrated videos (much like Lumen5), podcasts, or audiobooks. Fliki offers 850+ voices spanning 77+ languages and 100+ regional dialects.

It is one of the best AI video generators for content creation. Click here and try Fliki AI.

Text to Video Papers

2023

  • Text-To-4D Dynamic Scene Generation, Uriel Singer et al. [Paper] [Project]

2022

  • Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation, Jay Zhangjie Wu et al. [Paper] [Project] [Code]
  • MagicVideo: Efficient Video Generation With Latent Diffusion Models, Daquan Zhou et al. [Paper] [Project]
  • Phenaki: Variable Length Video Generation From Open Domain Textual Description, Ruben Villegas et al. [Paper]
  • Imagen Video: High Definition Video Generation with Diffusion Models, Jonathan Ho et al. [Paper] [Project]
  • Text-driven Video Prediction, Xue Song et al. [Paper]
  • Make-A-Video: Text-to-Video Generation without Text-Video Data, Uriel Singer et al. [Paper] [Project] [Short read] [Code]
  • StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation, Adyasha Maharana et al. [Paper] [Code]
  • Word-Level Fine-Grained Story Visualization, Bowen Li et al. [Paper] [Code]
  • CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers, Wenyi Hong et al. [Paper] [Code]
  • Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning, Yogesh Balaji et al. [Paper] [Code] [Project]
  • Video Diffusion Models, Jonathan Ho et al. [Paper] [Project]

Human-Video-Generation

Table of Contents

Text Guided Human Video Generation

Audio Guided Human Video Generation

Performance Video Generation

Co-Speech Gesture Video Generation

Pose Guided Human Video Generation

Applications

Datasets

Conditional-content-generation

Contents

Papers

Music-Driven motion generation

Taming Diffusion Models for Music-driven Conducting Motion Generation
NUS, AAAI 2023 Summer Symposium, [Code]

Music-Driven Group Choreography
AIOZ AI, CVPR'23

Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation
Illinois Institute of Technology, ICLR'23, [Code]

Magic: Multi Art Genre Intelligent Choreography Dataset and Network for 3D Dance Generation
Tsinghua University, 7 Dec 2022

Pretrained Diffusion Models for Unified Human Motion Synthesis
DAMO Academy, Alibaba Group, 6 Dec 2022

EDGE: Editable Dance Generation From Music
Stanford University, 19 Nov 2022

You Never Stop Dancing: Non-freezing Dance Generation via Bank-constrained Manifold Projection
MSRA, NeurIPS'22

GroupDancer: Music to Multi-People Dance Synthesis with Style Collaboration
Tsinghua University, ACMMM'22

A Brand New Dance Partner: Music-Conditioned Pluralistic Dancing Controlled by Multiple Dance Genres
Yonsei University, CVPR 2022, [Code]

Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory
NTU, CVPR 2022 (Oral), [Code]

Dance Style Transfer with Cross-modal Transformer
KTH, 22 Aug 2022, [Upcoming Code]

Music-driven Dance Regeneration with Controllable Key Pose Constraints
Tencent, 8 July 2022

Text-Driven motion generation

ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model
NTU, CVPR'23, [Code]

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
Peking University, CVPR'23

Human Motion Diffusion as a Generative Prior
Anonymous Authors, [Code]

T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations
Tencent AI Lab, 16 Jan 2023, [Code]

Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
Beihang University, 10 Jan 2023

Executing your Commands via Motion Diffusion in Latent Space
Tencent, 8 Dec 2022, [Code]

MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels
Seoul National University, AAAI 2023 Oral, [Code]

MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
Max Planck Institute for Informatics, 8 Dec 2022

UDE: A Unified Driving Engine for Human Motion Generation
Xiaobing Inc, 29 Nov 2022, [Upcoming Code]

MotionBERT: Unified Pretraining for Human Motion Analysis
SenseTime Research, 12 Oct 2022, [Code]

Human Motion Diffusion Model
Tel Aviv University, 3 Oct 2022, [Code]

FLAME: Free-form Language-based Motion Synthesis & Editing
Korea University, 1 Sep 2022

MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
NTU, 22 Aug 2022, [Code]

TEMOS: Generating diverse human motions from textual descriptions
MPI, ECCV 2022 (Oral), [Code]

GIMO: Gaze-Informed Human Motion Prediction in Context
Stanford University, ECCV 2022, [Code]

MotionCLIP: Exposing Human Motion Generation to CLIP Space
Tel Aviv University, ECCV 2022, [Code]

Generating Diverse and Natural 3D Human Motions from Text
University of Alberta, CVPR 2022, [Code]

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
NTU, SIGGRAPH 2022, [Code]

Audio-Driven motion generation

For more recent papers, see here

Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
NTU, CVPR'23, [Code]

GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis
Zhejiang University, ICLR'23, [Code]

DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model
Macau University of Science and Technology, 24 Jan 2023

DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis
Tsinghua University, 10 Jan 2023

Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation
University of Wrocław, 6 Jan 2023, [Upcoming Code]

Generating Holistic 3D Human Motion from Speech
Max Planck Institute for Intelligent Systems, 8 Dec 2022

Audio-Driven Co-Speech Gesture Video Generation
NTU, 5 Dec 2022

Listen, denoise, action! Audio-driven motion synthesis with diffusion models
KTH Royal Institute of Technology, 17 Nov 2022

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech
York University, 23 Sep 2022, [Code]

BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis
The University of Tokyo, ECCV 2022, [Code]

EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
Nanjing University, SIGGRAPH 2022, [Code]

Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation
The Chinese University of Hong Kong, CVPR 2022, [Code]

SEEG: Semantic Energized Co-speech Gesture Generation
Alibaba DAMO Academy, CVPR 2022, [Code]

FaceFormer: Speech-Driven 3D Facial Animation with Transformers
The University of Hong Kong, CVPR 2022, [Code]

Freeform Body Motion Generation from Speech
JD AI Research, 4 Mar 2022, [Code]

Human motion prediction

For more recent papers, see here

InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion
UIUC, ICCV 2023, [Code]

Stochastic Multi-Person 3D Motion Forecasting
UIUC, ICLR 2023 (Spotlight), [Code]

HumanMAC: Masked Motion Completion for Human Motion Prediction
Tsinghua University, ICCV 2023, [Code]

BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction
University of Barcelona, 25 Nov 2022, [Upcoming Code]

Diverse Human Motion Prediction Guided by Multi-Level Spatial-Temporal Anchors
UIUC, ECCV 2022 (Oral), [Code]

PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting
NAVER LABS, ECCV'2022, [Code]

NeMF: Neural Motion Fields for Kinematic Animation
Yale University, NeurIPS 2022 (Spotlight), [Code]

Multi-Person Extreme Motion Prediction
Inria, CVPR 2022, [Code]

MotionMixer: MLP-based 3D Human Body Pose Forecasting
Mercedes-Benz, IJCAI 2022 (Oral), [Code]

Motion Applications

MIME: Human-Aware 3D Scene Generation
MPI

Scene Synthesis from Human Motion
Stanford University, SIGGRAPH Asia 2022, [Code]

TEACH: Temporal Action Compositions for 3D Humans
MPI, 3DV 2022, [Code]

Motion In-betweening via Two-stage Transformers
Zhejiang University, SIGGRAPH Asia 2022

Skeleton2Humanoid: Animating Simulated Characters for Physically-plausible Motion In-betweening
Shanghai Jiaotong University, ACMMM 2022, [Upcoming Code]

Conditional Motion In-betweening
Korea University, 6 Oct 2022, [Code]

SkeletonMAE: Spatial-Temporal Masked Autoencoders for Self-supervised Skeleton Action Recognition
University of North Carolina, 1 Sep 2022

A Unified Framework for Real Time Motion Completion
NetEase Games AI Lab, AAAI 2022

Transformer based Motion In-betweening
National Institute of Technology - Tiruchirappalli, ACCV 2022 Workshop, [Code]

Text-Image Generation

For more recent papers, see here

Adding Conditional Control to Text-to-Image Diffusion Models
Stanford, Feb 2023

SpaText: Spatio-Textual Representation for Controllable Image Generation
Meta AI (FAIR), 25 Nov 2022

Sketch-Guided Text-to-Image Diffusion Models
Google Research, 24 Nov 2022

Make-A-Story: Visual Memory Conditioned Consistent Story Generation
University of British Columbia, 23 Nov 2022

Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models
University of Waterloo, 20 Nov 2022, [Upcoming Code]

InstructPix2Pix: Learning to Follow Image Editing Instructions
UC Berkeley, 17 Nov 2022

Null-text Inversion for Editing Real Images using Guided Diffusion Models
Google Research, 17 Nov 2022

HumanDiffusion: a Coarse-to-Fine Alignment Diffusion Framework for Controllable Text-Driven Person Image Generation
University of Chinese Academy of Sciences, 11 Nov 2022

Imagic: Text-Based Real Image Editing with Diffusion Models
Google Research, 17 Oct 2022

Self-Guided Diffusion Models
University of Amsterdam, 12 Oct 2022

On Distillation of Guided Diffusion Models
Stanford University, NeurIPS 2022 Workshop, 6 Oct 2022

DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
Google Research, 25 Aug 2022, [Code]

Prompt-to-Prompt Image Editing with Cross Attention Control
Google Research, 2 Aug 2022, [Code]

Improved Vector Quantized Diffusion Models
University of Science and Technology of China, 31 May 2022, [Code]

Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors
Meta AI Research, 24 Mar 2022

Diffusion Autoencoders: Toward a Meaningful and Decodable Representation
Vidyasirimedhi Institute of Science and Technology, CVPR 2022 (Oral), [Code]

Vector Quantized Diffusion Model for Text-to-Image Synthesis
University of Science and Technology of China, CVPR 2022, [Code]

High-Resolution Image Synthesis with Latent Diffusion Models
Runway ML, CVPR 2022, [Code]

Text-Video Generation

TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
Penn State University, CVPR 2024, [Code]

Conditional Image-to-Video Generation with Latent Flow Diffusion Models
Penn State University, CVPR 2023, [Code]

Text-To-4D Dynamic Scene Generation
Meta AI, 2023, [Code]

Structure and Content-Guided Video Synthesis with Diffusion Models
Runway, 6 Feb 2023

Latent Video Diffusion Models for High-Fidelity Video Generation with Arbitrary Lengths
The Hong Kong University of Science and Technology, 23 Nov 2022, [Upcoming Code]

MagicVideo: Efficient Video Generation With Latent Diffusion Models
ByteDance Inc, 20 Nov 2022

Text2LIVE: Text-Driven Layered Image and Video Editing
NVIDIA Research, ECCV 2022 (Oral), [Code]

Text-3D Image Generation

Point-E: A System for Generating 3D Point Clouds from Complex Prompts
OpenAI, 16 Dec 2022

DreamFusion: Text-to-3D using 2D Diffusion
Google Research, 29 Sep 2022

Digital Human

Table of Contents

Industry Demo or Product

Highavenue: Turn yourself into a 3D model.

3D Human Avatar Generation and Animation

RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models.
Bowen Zhang, Yiji Cheng, Chunyu Wang, Ting Zhang, Jiaolong Yang, Yansong Tang, Feng Zhao, Dong Chen, Baining Guo.
ECCV 2024. [PDF] [Project] [Code]

Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling.
Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu.
CVPR 2024. [PDF] [Project] [Code]

HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation.
Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang.
CVPR 2024. [PDF] [Project]

RAM-Avatar: Real-time Photo-Realistic Avatar from Monocular Videos with Full-body Control.
Xiang Deng, Zerong Zheng, Yuxiang Zhang, Jingxiang Sun, Chao Xu, XiaoDong Yang, Lizhen Wang, Yebin Liu.
CVPR 2024. [PDF] [Project] [Code]

TexVocab: Texture Vocabulary-conditioned Human Avatars.
Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang.
CVPR 2024. [PDF] [Project]

HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting.
Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, Ziwei Liu.
CVPR 2024. [PDF] [Project]

DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models.
Yukang Cao, Yan-Pei Cao, Kai Han, Ying Shan, Kwan-Yee K. Wong.
CVPR 2024. [PDF] [Project] [Code]

SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes.
Soubhik Sanyal, Partha Ghosh, Jinlong Yang, Michael J. Black, Justus Thies, Timo Bolkart.
CVPR 2024. [PDF] [Project]

3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting.
Zhiyin Qian, Shaofei Wang, Marko Mihajlovic, Andreas Geiger, Siyu Tang.
CVPR 2024. [PDF] [Project]

Emotional Speech-driven 3D Body Animation Via Disentangled Latent Diffusion.
Kiran Chhatre, Radek Daněček, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J. Black, Timo Bolkart.
CVPR 2024. [PDF]

GauHuman: Articulated Gaussian Splatting from Monocular Human Videos.
Shoukang Hu, Ziwei Liu.
CVPR 2024. [PDF] [Project] [Code]

FlashAvatar: High-Fidelity Digital Avatar Rendering at 300FPS.
Jun Xiang, Xuan Gao, Yudong Guo, Juyong Zhang.
CVPR 2024. [PDF] [Project]

PEGASUS: Personalized Generative 3D Avatars with Composable Attributes.
Hyunsoo Cha, Byungjun Kim, Hanbyul Joo.
CVPR 2024. [PDF] [Project] [Code]

TADA! Text to Animatable Digital Avatars.
Tingting Liao, Hongwei Yi, Yuliang Xiu, Jiaxiang Tang, Yangyi Huang, Justus Thies, Michael J. Black.
3DV 2024. [PDF] [Project] [Code]

Efficient 3D Articulated Human Generation with Layered Surface Volumes.
Yinghao Xu, Wang Yifan, Alexander W. Bergman, Menglei Chai, Bolei Zhou, Gordon Wetzstein.
3DV 2024. [PDF] [Project]

TECA: Text-Guided Generation and Editing of Compositional 3D Avatars.
Hao Zhang, Yao Feng, Peter Kulits, Yandong Wen, Justus Thies, Michael J. Black.
3DV 2024. [PDF] [Project]

FLARE: Fast Learning of Animatable and Relightable Mesh Avatars.
Shrisha Bharadwaj, Yufeng Zheng, Otmar Hilliges, Michael J. Black, Victoria Fernandez-Abrevaya.
SIGGRAPH Asia 2023. [PDF]

Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization.
Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis.
SIGGRAPH 2023. [PDF] [Project] [Code]

DELIFFAS: Deformable Light Fields for Fast Avatar Synthesis.
Youngjoong Kwon, Lingjie Liu, Henry Fuchs, Marc Habermann, Christian Theobalt.
NeurIPS 2023. [PDF]

PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation.
Zhaoxi Chen, Fangzhou Hong, Haiyi Mei, Guangcong Wang, Lei Yang, Ziwei Liu.
NeurIPS 2023. [PDF] [Project] [Code]

XAGen: 3D Expressive Human Avatars Generation.
Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Jiashi Feng, Mike Zheng Shou.
NeurIPS 2023. [PDF] [Project] [Code]

DreamHuman: Animatable 3D Avatars from Text.
Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Fieraru, Cristian Sminchisescu.
NeurIPS 2023. [PDF] [Project] [Avatar Gallery]

DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars.
David Svitov, Dmitrii Gudkov, Renat Bashirov, Victor Lempitsky.
ICCV 2023. [PDF] [Code]

AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control.
Ruixiang Jiang, Can Wang, Jingbo Zhang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao.
ICCV 2023. [PDF] [Project] [Code] [Data]

StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation.
Chi Zhang, Yiwen Chen, Yijun Fu, Zhenglin Zhou, Gang YU, Billzb Wang, Bin Fu, Tao Chen, Guosheng Lin, Chunhua Shen.
ICCV 2023. [PDF] [Project] [Code]

Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using Pixel-aligned Reconstruction Priors.
Zhangyang Xiong, Di Kang, Derong Jin, Weikai Chen, Linchao Bao, Xiaoguang Han.
ICCV 2023. [PDF] [Code]

GETAvatar: Generative Textured Meshes for Animatable Human Avatars.
Xuanmeng Zhang, Jianfeng Zhang, Rohan Chacko, Hongyi Xu, Guoxian Song, Yi Yang, Jiashi Feng.
ICCV 2023. [PDF] [Project]

AG3D: Learning to Generate 3D Avatars from 2D Image Collections.
Zijian Dong, Xu Chen, Jinlong Yang, Michael J. Black, Otmar Hilliges, Andreas Geiger.
ICCV 2023. [PDF] [Project] [Code]

Learning Locally Editable Virtual Humans.
Hsuan-I Ho, Lixin Xue, Jie Song, Otmar Hilliges.
CVPR 2023. [PDF] [Project] [Code]

PersonNeRF: Personalized Reconstruction from Photo Collections.
Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman.
CVPR 2023. [PDF] [Project] [Code]

Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures.
Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, Daniel Cohen-Or.
CVPR 2023. [PDF] [Code]

EVA3D: Compositional 3D Human Generation from 2D Image Collections.
Fangzhou Hong, Zhaoxi Chen, Yushi Lan, Liang Pan, Ziwei Liu.
ICLR 2023. [PDF] [Project] [Code]

CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes.
Kim Youwang, Kim Ji-Yeon, Tae-Hyun Oh.
ECCV 2022. [PDF] [Project] [Code]

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars.
Fangzhou Hong, Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang, Ziwei Liu.
SIGGRAPH (TOG) 2022. [PDF] [Project] [Code]

WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation.
Zihao Huang, ShouKang Hu, Guangcong Wang, Tianqi Liu, Yuhang Zang, Zhiguo Cao, Wei Li, Ziwei Liu.
arXiv 2024. [PDF] [Project]

Drivable 3D Gaussian Avatars.
Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero.
arXiv 2023. [PDF] [Project]

MagicAvatar: Multimodal Avatar Generation and Animation.
Jianfeng Zhang, Hanshu Yan, Zhongcong Xu, Jiashi Feng, Jun Hao Liew.
arXiv 2023. [PDF] [Project]

DELTA: Learning Disentangled Avatars with Hybrid 3D Representations.
Yao Feng, Weiyang Liu, Timo Bolkart, Jinlong Yang, Marc Pollefeys, Michael J. Black.
arXiv 2023. [PDF] [Project] [Code]

AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation.
Yifei Zeng, Yuanxun Lu, Xinya Ji, Yao Yao, Hao Zhu, Xun Cao.
arXiv 2023. [PDF] [Project] [Code]

3D Head Animatable Avatar (from 2D Image Collections)

PAV: Personalized Head Avatar from Unstructured Video Collection.
Akin Caliskan, Berkay Kicanaoglu, Hyeongwoo Kim.
ECCV 2024. [PDF] [Project]

Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians.
Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu.
CVPR 2024. [PDF] [Project]

HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation.
Hongyu Liu, Xuan Wang, Ziyu Wan, Yujun Shen, Yibing Song, Jing Liao, Qifeng Chen.
SIGGRAPH 2024. [PDF] [Project]

GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar.
Berna Kabadayi, Wojciech Zielonka, Bharat Lal Bhatnagar, Gerard Pons-Moll, Justus Thies.
3DV 2024. [PDF] [Project]

NeRFEditor: Differentiable Style Decomposition for Full 3D Scene Editing.
Chunyi Sun, Yanbin Liu, Junlin Han, Stephen Gould.
WACV 2024. [PDF] [Project]

AlbedoGAN: Towards Realistic Generative 3D Face Models.
Aashish Rai, Hiresh Gupta, Ayush Pandey, Francisco Vicente Carrasco, Shingo Jason Takagi, Amaury Aubel, Daeil Kim, Aayush Prakash, Fernando de la Torre.
WACV 2024. [PDF] [Project] [Code]

AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars.
Mohit Mendiratta. Xingang Pan, Mohamed Elgharib, Kartik Teotia, Mallikarjun B R, Ayush Tewari, Vladislav Golyanik, Adam Kortylewski, Christian Theobalt.
TOG 2024. [PDF] [Project]

HQ3DAvatar: High Quality Controllable 3D Head Avatar.
Kartik Teotia, Mallikarjun B R, Xingang Pan, Hyeongwoo Kim, Pablo Garrido, Mohamed Elgharib, Christian Theobalt.
TOG 2023. [PDF] [Project] [Code]

CLIPFace: Text-guided Editing of Textured 3D Morphable Models.
Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner.
SIGGRAPH 2023. [PDF] [Project] [Code]

DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance.
Longwen Zhang, Qiwei Qiu, Hongyang Lin, Qixuan Zhang, Cheng Shi, Wei Yang, Ye Shi, Sibei Yang, Lan Xu, Jingyi Yu.
SIGGRAPH 2023. [PDF] [Project] [Demo] [HuggingFace]

StyleAvatar: Real-time Photo-realistic Neural Portrait Avatar from a Single Video.
Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu.
SIGGRAPH 2023. [PDF]

LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar.
Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, Guojun Qi, Yebin Liu.
SIGGRAPH 2023. [PDF] [Project] [Code]

NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads.
Tobias Kirschstein, Shenhan Qian, Simon Giebenhain, Tim Walter, Matthias Nießner.
SIGGRAPH 2023. [PDF] [Project] [Video]

GOHA: Generalizable One-shot Neural Head Avatar.
Xueting Li, Shalini De Mello, Sifei Liu, Koki Nagano, Umar Iqbal, Jan Kautz.
NeurIPS 2023. [PDF] [Project]

GANHead: Towards Generative Animatable Neural Head Avatars.
[Sijing Wu](https://wsj-sjtu.github.io), Yichao Yan, Yunhao Li, Yuhao Cheng, Wenhan Zhu, Ke Gao, Xiaobo Li, Guangtao Zhai.
CVPR 2023. [PDF] [Project]

FitMe: Deep Photorealistic 3D Morphable Model Avatars.
Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, Baris Gecer, Jiankang Deng, Stefanos Zafeiriou.
CVPR 2023. [PDF] [Project]

Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars.
Jingxiang Sun, Xuan Wang, Lizhen Wang, Xiaoyu Li, Yong Zhang, Hongwen Zhang, Yebin Liu.
CVPR 2023 (Highlight). [PDF] [Project] [Code]

BlendFields: Few-Shot Example-Driven Facial Modeling.
Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski.
CVPR 2023. [PDF] [Project]

OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering.
Zhiyuan Ma, Xiangyu Zhu, Guojun Qi, Zhen Lei, Lei Zhang.
CVPR 2023. [PDF] [Code] [Demo]

PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°.
Sizhe An, Hongyi Xu, Yichun Shi, Guoxian Song, Umit Ogras, Linjie Luo.
CVPR 2023. [PDF] [Project]

Efficient Meshy Neural Fields for Animatable Human Avatars.
Xiaoke Huang, Yiji Cheng, Yansong Tang, Xiu Li, Jie Zhou, Jiwen Lu.
CVPR 2023. [PDF] [Project]

Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion.
Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, Baining Guo.
CVPR 2023. [PDF] [Project]

OmniAvatar: Geometry-Guided Controllable 3D Head Synthesis.
Hongyi Xu, Guoxian Song, Zihang Jiang, Jianfeng Zhang, Yichun Shi, Jing Liu, Wanchun Ma, Jiashi Feng, Linjie Luo.
CVPR 2023. [PDF]

PointAvatar: Deformable Point-based Head Avatars from Videos.
Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, Otmar Hilliges.
CVPR 2023. [PDF] [Project] [Code]

MEGANE: Morphable Eyeglass and Avatar Network.
Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, Jason Saragih.
CVPR 2023. [PDF] [Project]

Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video.
Xuan Gao, ChengLai Zhong, Jun Xiang, Yang Hong, Yudong Guo, Juyong Zhang.
TOG 2022. [PDF] [Project] [Code]

(Clothed) Human Motion Generation

TLControl: Trajectory and Language Control for Human Motion Synthesis.
Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu.
ECCV 2024. [PDF] [Project]

CoMo: Controllable Motion Generation through Language Guided Pose Code Editing.
Yiming Huang, Weilin Wan, Yue Yang, Chris Callison-Burch, Mark Yatskar, Lingjie Liu.
ECCV 2024. [PDF] [Project]

Total Selfie: Generating Full-Body Selfies.
Bowei Chen, Brian Curless, Ira Kemelmacher-Shlizerman, Steve Seitz.
CVPR 2024 (Highlight). [PDF] [Project]

OmniControl: Control Any Joint at Any Time for Human Motion Generation.
Yiming Xie, Varun Jampani, Lei Zhong, Deqing Sun, Huaizu Jiang.
ICLR 2024. [PDF] [Project]

MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model.
Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, Ziwei Liu.
TPAMI 2024. [PDF] [Project] [Code]

TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis.
Mathis Petrovich, Michael J. Black and Gül Varol.
ICCV 2023. [PDF] [Project] [Code]

SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation.
Nikos Athanasiou, Mathis Petrovich, Michael J. Black, Gül Varol.
ICCV 2023. [PDF] [Project] [Code]

HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion.
Mustafa Işık, Martin Rünz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, Matthias Nießner.
SIGGRAPH 2023. [PDF] [Project] [Code] [Data]

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents.
Tenglong Ao, Zeyi Zhang, Libin Liu.
SIGGRAPH 2023 (Journal Track). [PDF] [Project] [Code]

MDM: Human Motion Diffusion Model.
Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, Amit H. Bermano.
ICLR 2023. [PDF] [Project] [Code]

MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis.
Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, Christian Theobalt.
CVPR 2023. [PDF] [Project]

MotionCLIP: Exposing Human Motion Generation to CLIP Space.
Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, Daniel Cohen-Or.
ECCV 2022. [PDF] [Project] [Code]

TEMOS: Generating diverse human motions from textual descriptions.
Mathis Petrovich, Michael J. Black, Gül Varol.
ECCV 2022. [PDF] [Project] [Code]

TEACH: Temporal Action Composition for 3D Human.
Nikos Athanasiou, Mathis Petrovich, Michael J. Black, Gül Varol.
3DV 2022. [PDF] [Project] [Code]

Clothed Human Digitalization

Project Splinter: Human Digitalization with Implicit Representation.

PuzzleAvatar: Assembling 3D Avatars from Personal Albums.
Yuliang Xiu, Yufei Ye, Zhen Liu, Dimitrios Tzionas, Michael J. Black.
SIGGRAPH Asia (TOG) 2024. [PDF]

iHuman: Instant Animatable Digital Humans From Monocular Videos.
Pramish Paudel, Anubhav Khanal, Ajad Chhatkuli, Danda Pani Paudel, Jyoti Tandukar.
ECCV 2024. [PDF]

HiLo: Detailed and Robust 3D Clothed Human Reconstruction with High-and Low-Frequency Information of Parametric Models.
Yifan Yang, Dong Liu, Shuhai Zhang, Zeshuai Deng, Zixiong Huang, Mingkui Tan.
CVPR 2024. [PDF] [Github]

IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos Via Explicit Ray Tracing.
Shaofei Wang, Božidar Antić, Andreas Geiger, Siyu Tang.
CVPR 2024. [PDF] [Project]

GaussianAvatar: Towards Realistic Human Avatar Modeling from A Single Video Via Animatable 3D Gaussians.
Liangxiao Hu, Hongwen Zhang, Yuxiang Zhang, Boyao Zhou, Boning Liu, Shengping Zhang, Liqiang Nie.
CVPR 2024. [PDF] [Project] [Github]

SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion.
Hsuan-I Ho, Jie Song, Otmar Hilliges.
CVPR 2024. [PDF] [Project]

Recovering 3D Human Mesh from Monocular Images: A Survey.
Yating Tian, Hongwen Zhang, Yebin Liu, Limin Wang.
TPAMI 2023. [PDF] [Project] [Dataset] [Benchmarks]

Mirror-Aware Neural Humans.
Daniel Ajisafe, James Tang, Shih-Yang Su, Bastian Wandt, Helge Rhodin.
3DV 2024. [PDF] [Project]

TeCH: Text-guided Reconstruction of Lifelike Clothed Humans.
Yangyi Huang, Hongwei Yi, Yuliang Xiu, Tingting Liao, Jiaxiang Tang, Deng Cai, Justus Thies.
3DV 2024. [PDF] [Project]

Single-Image 3D Human Digitization with Shape-Guided Diffusion.
Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang.
SIGGRAPH Asia 2023. [PDF] [Project]

Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction.
Zechuan Zhang, Li Sun, Zongxin Yang, Ling Chen, Yi Yang.
NeurIPS 2023. [PDF]

ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns.
Ren Li, Benoît Guillard, Pascal Fua.
NeurIPS 2023. [PDF]

NCHO: Unsupervised Learning for Neural 3D Composition of Humans and Objects.
Taeksoo Kim, Shunsuke Saito, Hanbyul Joo.
ICCV 2023. [PDF] [Project]

Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models.
Byungjun Kim, Patrick Kwon, Kwangho Lee, Myunggi Lee, Sookwan Han, Daesik Kim, Hanbyul Joo.
ICCV 2023 (Oral). [PDF] [Project]

SHERF: Generalizable Human NeRF from a Single Image.
Shoukang Hu, Fangzhou Hong, Liang Pan, Haiyi Mei, Lei Yang, Ziwei Liu.
ICCV 2023. [PDF] [Project] [Code]

SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling.
Zhitao Yang, Zhongang Cai, Haiyi Mei, Shuai Liu, Zhaoxi Chen, Weiye Xiao, Yukun Wei, Zhongfei Qing, Chen Wei, Bo Dai, Wayne Wu, Chen Qian, Dahua Lin, Ziwei Liu, Lei Yang.
ICCV 2023. [PDF] [Project]

Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction.
Hyeongjin Nam, Daniel Sungho Jung, Yeonguk Oh, Kyoung Mu Lee.
ICCV 2023. [PDF]

AvatarReX: Real-time Expressive Full-body Avatars.
Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu.
SIGGRAPH 2023. [PDF] [Project]

PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling.
Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu.
SIGGRAPH 2023. [PDF] [Project]

High-Fidelity Clothed Avatar Reconstruction from a Single Image.
Tingting Liao, Xiaomei Zhang, Yuliang Xiu, Hongwei Yi, Xudong Liu, Guo-Jun Qi, Yong Zhang, Xuan Wang, Xiangyu Zhu, Zhen Lei.
CVPR 2023. [PDF] [Code]

SeSDF: Self-evolved Signed Distance Field for Implicit 3D Clothed Human Reconstruction.
Yukang Cao, Kai Han, Kenneth Kwan-Yee K. Wong.
CVPR 2023. [PDF] [Project] [Code]

Structured 3D Features for Reconstructing Relightable and Animatable Avatars.
Enric Corona, Mihai Zanfir, Thiemo Alldieck, Eduard Gabriel Bazavan, Andrei Zanfir, Cristian Sminchisescu.
CVPR 2023. [PDF] [Project]

Reconstructing Animatable Categories from Videos.
Gengshan Yang, Chaoyang Wang, N Dinesh Reddy, Deva Ramanan.
CVPR 2023. [PDF] [Project] [Code]

Representing Volumetric Videos as Dynamic MLP Maps.
Sida Peng, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou.
CVPR 2023. [PDF] [Project] [Code]

Learning Neural Volumetric Representations of Dynamic Humans in Minutes.
Chen Geng, Sida Peng, Zhen Xu, Hujun Bao, Xiaowei Zhou.
CVPR 2023. [PDF] [Project] [Code]

CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition.
Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu.
CVPR 2023. [PDF] [Project]

MonoHuman: Animatable Human Neural Field from Monocular Video.
Zhengming Yu, Wei Cheng, Xian Liu, Wayne Wu, Kwan-Yee Lin.
CVPR 2023. [PDF] [Project]

FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from Sparse Views.
Vinoj Jayasundara, Amit Agrawal, Nicolas Heron, Abhinav Shrivastava, Larry S. Davis.
CVPR 2023. [PDF]

High-fidelity 3D Human Digitization from Single 2K Resolution Images.
Sang-Hun Han, Min-Gyu Park, Ju Hong Yoon, Ju-Mi Kang, Young-Jae Park, Hae-Gon Jeon.
CVPR 2023 (Highlight). [PDF] [Code]

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition.
Chen Guo, Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges.
CVPR 2023. [PDF] [Project]

Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion.
Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, Baining Guo.
CVPR 2023. [PDF] [Project]

ECON: Explicit Clothed humans Obtained from Normals.
Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, Michael J. Black.
CVPR 2023. [PDF] [Project] [Code]

X-Avatar: Expressive Human Avatars.
Kaiyue Shen, Chen Guo, Manuel Kaufmann, Juan Jose Zarate, Julien Valentin, Jie Song, Otmar Hilliges.
CVPR 2023. [PDF] [Project] [Code]

InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds.
Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges.
CVPR 2023. [PDF] [Project] [Code]

Learning Visibility Field for Detailed 3D Human Reconstruction and Relighting.
Ruichen Zheng, Peng Li, Haoqian Wang, Tao Yu.
CVPR 2023. [PDF]

HumanGen: Generating Human Radiance Fields with Explicit Priors.
Suyi Jiang, Haoran Jiang, Ziyu Wang, Haimin Luo, Wenzheng Chen, Lan Xu.
CVPR 2023. [PDF]

SHARP: Shape-Aware Reconstruction of People In Loose Clothing.
Sai Sagar Jinka, Rohan Chacko, Astitva Srivastava, Avinash Sharma, P.J. Narayanan.
IJCV 2023. [PDF]

Geometry-aware Two-scale PIFu Representation for Human Reconstruction.
Zheng Dong, Ke Xu, Ziheng Duan, Hujun Bao, Weiwei Xu, Rynson W.H. Lau.
NeurIPS 2022. [PDF]

TotalSelfScan: Learning Full-body Avatars from Self-Portrait Videos of Faces, Hands, and Bodies.
Junting Dong, Qi Fang, Yudong Guo, Sida Peng, Qing Shuai, Hujun Bao, Xiaowei Zhou.
NeurIPS 2022. [PDF] [Project] [Data]

FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction.
Qiao Feng, Yebin Liu, Yu-Kun Lai, Jingyu Yang, Kun Li.
NeurIPS 2022. [PDF] [Project] [Code]

Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces.
Yihao Zhi, Shenhan Qian, Xinhao Yan, Shenghua Gao.
3DV 2022. [PDF] [Code]

Neural Point-based Shape Modeling of Humans in Challenging Clothing.
Qianli Ma, Jinlong Yang, Michael J. Black, Siyu Tang.
3DV 2022. [PDF]

HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars.
Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker.
3DV 2022. [PDF] [Project]

Human Performance Modeling and Rendering via Neural Animated Mesh.
Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu.
SIGGRAPH Asia 2022. [PDF] [Project]

FloRen: Real-time High-quality Human Performance Rendering via Appearance Flow Using Sparse RGB Cameras.
Ruizhi Shao, Liliang Chen, Zerong Zheng, Hongwen Zhang, Yuxiang Zhang, Han Huang, Yandong Guo, Yebin Liu.
SIGGRAPH Asia 2022. [PDF]

Occupancy Planes for Single-view RGB-D Human Reconstruction.
Xiaoming Zhao, Yuan-Ting Hu, Zhongzheng Ren, Alexander G. Schwing.
AAAI 2023. [PDF] [Code]

HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling.
Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia Wang, Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, Fangzhou Hong, Mingyuan Zhang, Chen Change Loy, Lei Yang, Ziwei Liu.
ECCV 2022 (Oral). [PDF] [Project]

Unsupervised Learning of Efficient Geometry-Aware Neural Articulated Representations.
Atsuhiro Noguchi, Xiao Sun, Stephen Lin, Tatsuya Harada.
ECCV 2022. [PDF] [Project] [Code]

NeuMan: Neural Human Radiance Field from a Single Video.
Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, Anurag Ranjan.
ECCV 2022. [PDF] [Code]

ARAH: Animatable Volume Rendering of Articulated Human SDFs.
Shaofei Wang, Katja Schwarz, Andreas Geiger, Siyu Tang.
ECCV 2022. [PDF] [Project] [Code]

DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras.
Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu.
ECCV 2022 (Oral). [PDF] [Code]

LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling.
Boyan Jiang, Xinlin Ren, Mingsong Dou, Xiangyang Xue, Yanwei Fu, Yinda Zhang.
ECCV 2022. [PDF] [Code]

Neural Capture of Animatable 3D Human from Monocular Video.
Gusi Te, Xiu Li, Xiao Li, Jinglu Wang, Wei Hu, Yan Lu.
ECCV 2022. [PDF]

The One Where They Reconstructed 3D Humans and Environments in TV Shows.
Georgios Pavlakos, Ethan Weber, Matthew Tancik, Angjoo Kanazawa.
ECCV 2022. [PDF] [Project]

UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation.
Shenhan Qian, Jiale Xu, Ziwei Liu, Liqian Ma, Shenghua Gao.
ECCV 2022. [PDF]

3D Clothed Human Reconstruction in the Wild.
Gyeongsik Moon, Hyeongjin Nam, Takaaki Shiratori, Kyoung Mu Lee.
ECCV 2022. [PDF] [Project]

NDF: Neural Deformable Fields for Dynamic Human Modelling.
Ruiqi Zhang, Jie Chen.
ECCV 2022. [PDF]

Learning Implicit Templates for Point-Based Clothed Human Modeling.
Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu.
ECCV 2022. [PDF] [Project]

DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks.
Shih-Yang Su, Timur Bagautdinov, and Helge Rhodin.
ECCV 2022. [PDF] [Project] [Code]

Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural Human Rendering.
Mingfei Chen, Jianfeng Zhang, Xiangyu Xu, Lijuan Liu, Jiashi Feng, Shuicheng Yan.
ECCV 2022. [PDF]

AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture.
Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu.
ECCV 2022. [PDF] [Project]

Authentic Volumetric Avatars From a Phone Scan.
Chen Cao, Tomas Simon, Jin Kyu Kim, Gabe Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Lombardi, Shih-en Wei, Danielle Belko, Shoou-i Yu, Yaser Sheikh, Jason Saragih.
SIGGRAPH 2022. [PDF] [Project]

HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs.
Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu.
CVPR 2022. [PDF] [Project]

Photorealistic Monocular 3D Reconstruction of Humans Wearing Clothing.
Thiemo Alldieck, Mihai Zanfir, Cristian Sminchisescu.
CVPR 2022. [PDF] [Project]

HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video.
Chung-Yi Weng, Brian Curless, Pratul Srinivasan, Jonathan T. Barron, Ira Kemelmacher-Shlizerman.
CVPR 2022. [PDF] [Project]

H4D: Human 4D Modeling by Learning Neural Compositional Representation.
Boyan Jiang, Yinda Zhang, Xingkui Wei, Xiangyang Xue, Yanwei Fu.
CVPR 2022. [PDF]

OcclusionFusion: Occlusion-aware Motion Estimation for Real-time Dynamic 3D Reconstruction.
Wenbin Lin, Chengwei Zheng, Jun-Hai Yong, Feng Xu.
CVPR 2022. [PDF] [Project]

PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence.
Zijian Dong, Chen Guo, Jie Song, Xu Chen, Andreas Geiger, Otmar Hilliges.
CVPR 2022. [PDF] [Project]

SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video.
Boyi Jiang, Yang Hong, Hujun Bao, Juyong Zhang.
CVPR 2022. [PDF] [Project]

Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time.
Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu.
CVPR 2022. [PDF]

NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions.
Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu.
CVPR 2022. [PDF] [Project]

JIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction.
Yukang Cao, Guanying Chen, Kai Han, Wenqi Yang, Kwan-Yee K. Wong.
CVPR 2022 (oral). [PDF] [Project]

High-Fidelity Human Avatars from a Single RGB Camera.
Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li.
CVPR 2022. [PDF] [Project] [Data]

ICON: Implicit Clothed humans Obtained from Normals.
Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black.
CVPR 2022. [PDF] [Code]

Structured Local Radiance Fields for Human Avatar Modeling.
Zerong Zheng, Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu.
CVPR 2022. [PDF] [Project] [Code]

DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Reconstruction and Rendering.
Ruizhi Shao, Hongwen Zhang, He Zhang, Mingjia Chen, Yanpei Cao, Tao Yu, Yebin Liu.
CVPR 2022. [PDF] [Project]

I M Avatar: Implicit Morphable Head Avatars from Videos.
Yufeng Zheng, Victoria Fernández Abrevaya, Xu Chen, Marcel C. Bühler, Michael J. Black, Otmar Hilliges.
CVPR 2022. [PDF]

Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis.
Tianhan Xu, Yasuhiro Fujita, Eiichi Matsumoto.
CVPR 2022. [PDF]

Neural Head Avatars from Monocular RGB Videos.
Philip-William Grassal, Malte Prinzler, Titus Leistner, Carsten Rother, Matthias Nießner, Justus Thies.
CVPR 2022. [PDF] [Project]

gDNA: Towards Generative Detailed Neural Avatars.
Xu Chen, Tianjian Jiang, Jie Song, Jinlong Yang, Michael J. Black, Andreas Geiger, Otmar Hilliges.
CVPR 2022. [PDF] [Project] [Code]

HumanNeRF: Generalizable Neural Human Radiance Field from Sparse Inputs.
Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu.
CVPR 2022. [PDF] [Project]

PERGAMO: Personalized 3D Garments from Monocular Video.
Andrés Casado-Elvira, Marc Comino Trinidad, Dan Casas.
CGF 2022. [PDF] [Project]

Cloth Modelling, Draping, Simulation, and Dressing

4D-DRESS: A 4D Dataset of Real-world Human Clothing with Semantic Annotations.
Wenbo Wang, Hsuan-I Ho, Chen Guo, Boxiang Rong, Artur Grigorev, Jie Song, Juan Jose Zarate, Otmar Hilliges.
CVPR 2024 (Highlight). [PDF] [Project] [Data] [Code]

A Generative Multi-Resolution Pyramid and Normal-Conditioning 3D Cloth Draping.
Hunor Laczkó, Meysam Madadi, Sergio Escalera, Jordi Gonzalez.
WACV 2024. [PDF]

Towards Multi-Layered 3D Garments Animation.
Yidi Shao, Chen Change Loy, Bo Dai.
ICCV 2023. [PDF] [Project]

REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos.
Lingteng Qiu, Guanying Chen, Jiapeng Zhou, Mutian Xu, Junle Wang, Xiaoguang Han.
CVPR 2023. [PDF] [Project] [Code]

HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics.
Artur Grigorev, Bernhard Thomaszewski, Michael J. Black, Otmar Hilliges.
CVPR 2023. [PDF] [Project] [Code]

Deep Deformation Detail Synthesis for Thin Shell Models.
Lan Chen, Lin Gao, Jie Yang, Shibiao Xu, Juntao Ye, Xiaopeng Zhang, Yu-Kun Lai.
CGF 2023. [PDF]

Motion Guided Deep Dynamic 3D Garments.
Meng Zhang, Duygu Ceylan, Niloy J. Mitra.
SIGGRAPH Asia 2022. [PDF] [Project]

Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks.
Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha.
SIGGRAPH 2022. [PDF] [Code]

DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact.
Yifei Li, Tao Du, Kui Wu, Jie Xu, Wojciech Matusik.
TOG 2022. [PDF] [Project] [Code]

DIG: Draping Implicit Garment over the Human Body.
Ren Li, Benoît Guillard, Edoardo Remelli, Pascal Fua.
ACCV 2022. [PDF]

SNUG: Self-Supervised Neural Dynamic Garments.
Igor Santesteban, Miguel A. Otaduy, Dan Casas.
CVPR 2022 (Oral). [PDF] [Project] [Code]

Registering Explicit to Implicit: Towards High-Fidelity Garment mesh Reconstruction from Single Images.
Heming Zhu, Lingteng Qiu, Yuda Qiu, Xiaoguang Han.
CVPR 2022. [PDF] [Project]

Human Image and Video Generation

Gaussian Shell Maps for Efficient 3D Human Generation.
Rameen Abdal, Wang Yifan, Zifan Shi, Yinghao Xu, Ryan Po, Zhengfei Kuang, Qifeng Chen, Dit-Yan Yeung, Gordon Wetzstein.
CVPR 2024. [PDF] [Project]

HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion.
Xian Liu, Jian Ren, Aliaksandr Siarohin, Ivan Skorokhodov, Yanyu Li, Dahua Lin, Xihui Liu, Ziwei Liu, Sergey Tulyakov.
ICLR 2024. [PDF] [Project]

VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis.
Xinya Chen, Jiaxin Huang, Yanrui Bin, Lu Yu, Yiyi Liao.
ICCV 2023. [PDF]

UnitedHuman: Harnessing Multi-Source Data for High-Resolution Human Generation.
Jianglin Fu, Shikai Li, Yuming Jiang, Kwan-Yee Lin, Wayne Wu, Ziwei Liu.
ICCV 2023. [PDF] [Project] [Github]

Text2Performer: Text-Driven Human Video Generation.
Yuming Jiang, Shuai Yang, Tong Liang Koh, Wayne Wu, Chen Change Loy, Ziwei Liu.
ICCV 2023. [PDF] [Project] [Code]

Text-guided 3D Human Generation from 2D Collections.
Tsu-Jui Fu, Wenhan Xiong, Yixin Nie, Jingyu Liu, Barlas Oğuz, William Yang Wang.
EMNLP 2023 (Findings). [PDF] [Project]

Cross Attention Based Style Distribution for Controllable Person Image Synthesis.
Xinyue Zhou, Mingyu Yin, Xinyuan Chen, Li Sun, Changxin Gao, Qingli Li.
ECCV 2022. [PDF]

StyleGAN-Human: A Data-Centric Odyssey of Human Generation.
Jianglin Fu, Shikai Li, Yuming Jiang, Kwan-Yee Lin, Chen Qian, Chen Change Loy, Wayne Wu, Ziwei Liu.
ECCV 2022. [PDF] [Code] [Project] [Colab Demo] [Hugging Face Demo]

Text2Human: Text-Driven Controllable Human Image Generation.
Yuming Jiang, Shuai Yang, Haonan Qiu, Wayne Wu, Chen Change Loy, Ziwei Liu.
SIGGRAPH 2022. [PDF] [Code] [Project]

Self-Supervised Correlation Mining Network for Person Image Generation.
Zijian Wang, Xingqun Qi, Kun Yuan, Muyi Sun.
CVPR 2022. [PDF]

BodyGAN: General-Purpose Controllable Neural Human Body Generation.
Chaojie Yang, Hanhui Li, Shengjie Wu, Shengkai Zhang, Haonan Yan, Nianhong Jiao, Jie Tang, Runnan Zhou, Xiaodan Liang, Tianxiang Zheng.
CVPR 2022. [PDF]

InsetGAN for Full-Body Image Generation.
Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan Lu.
CVPR 2022. [PDF] [Project]

Neural Texture Extraction and Distribution for Controllable Person Image Synthesis.
Yurui Ren, Xiaoqing Fan, Ge Li, Shan Liu, Thomas H. Li.
CVPR 2022 (oral). [PDF] [Code]

Exploring Dual-task Correlation for Pose Guided Person Image Generation.
Pengze Zhang, Lingxiao Yang, Jianhuang Lai, Xiaohua Xie.
CVPR 2022. [PDF]

Image-Based Virtual Try-On

[Awesome Virtual Try-on (VTON)]

FashionTex: Controllable Virtual Try-on with Text and Texture.
Anran Lin, Nanxuan Zhao, Shuliang Ning, Yuda Qiu, Baoyuan Wang, Xiaoguang Han.
SIGGRAPH 2023. [PDF]

TryOnDiffusion: A Tale of Two UNets.
Luyang Zhu, Dawei Yang, Tyler Zhu, Fitsum Reda, William Chan, Chitwan Saharia, Mohammad Norouzi, Ira Kemelmacher-Shlizerman.
CVPR 2023. [PDF] [Project]

High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions.
Sangyun Lee, Gyojung Gu, Sunghyun Park, Seunghwan Choi, Jaegul Choo.
ECCV 2022. [PDF] [Project] [Code]

Single Stage Virtual Try-on via Deformable Attention Flows.
Shuai Bai, Huiling Zhou, Zhikang Li, Chang Zhou, Hongxia Yang.
ECCV 2022. [PDF]

MGN: A Regional Mask Guided Network for Parser-free Virtual Try-on.
Chao Lin, Zhao Li, Sheng Zhou, Shichang Hu, Jialun Zhang, Linhao Luo, Jiarun Zhang, Longtao Huang, Yuan He.
IJCAI 2022. [PDF]

ClothFormer: Taming Video Virtual Try-on in All Module.
Jianbin Jiang, Tan Wang, He Yan, Junhui Liu.
CVPR 2022 (oral). [PDF] [Project]

Style-Based Global Appearance Flow for Virtual Try-On.
Sen He, Yi-Zhe Song, Tao Xiang.
CVPR 2022. [PDF]

Dressing in the Wild by Watching Dance Videos.
Xin Dong, Fuwei Zhao, Zhenyu Xie, Xijin Zhang, Daniel K. Du, Min Zheng, Xiang Long, Xiaodan Liang, Jianchao Yang.
CVPR 2022. [PDF] [Project]

CIT: Cloth Interactive Transformer for Virtual Try-On.
Bin Ren, Hao Tang, Fanyang Meng, Runwei Ding, Ling Shao, Philip H.S. Torr, Nicu Sebe.
CVPR 2022. [PDF] [Code]

Weakly Supervised High-Fidelity Clothing Model Generation.
Ruili Feng, Cheng Ma, Chengji Shen, Xin Gao, Zhenjiang Liu, Xiaobo Li, Kairi Ou, Zhengjun Zha.
CVPR 2022. [PDF]

Human Body Reshaping

Structure-Aware Flow Generation for Human Body Reshaping.
Jianqiang Ren, Yuan Yao, Biwen Lei, Miaomiao Cui, Xuansong Xie.
CVPR 2022. [PDF] [Code]

Scene context-aware Human Body Generation

Putting People in Their Place: Affordance-Aware Human Insertion Into Scenes.
Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A. Efros, Krishna Kumar Singh.
CVPR 2023. [PDF] [Project]

Human Mesh Recovery

GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras.
Ye Yuan, Umar Iqbal, Pavlo Molchanov, Kris Kitani, Jan Kautz.
CVPR 2022 (Oral). [PDF] [Project] [Code]

Shapy: Accurate 3D Body Shape Regression Using Metric and Semantic Attributes.
Vasileios Choutas, Lea Muller, Chun-Hao P. Huang, Siyu Tang, Dimitrios Tzionas, Michael J. Black.
CVPR 2022. [PDF] [Project] [Code]

PoseScript: 3D Human Poses from Natural Language.
Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-Noguer, Grégory Rogez.
ECCV 2022. [PDF] [Code]

Human-Centric Perception

Versatile Multi-Modal Pre-Training for Human-Centric Perception.
Fangzhou Hong, Liang Pan, Zhongang Cai, Ziwei Liu.
CVPR 2022. [PDF] [Code] [Project]

Datasets

Garment Design

Fashion Style Influences

Team and People

Dataset

  • SMPL. To download the SMPL-X, SMPL+H, and SMPL models (male, female, and gender-neutral), go to the project website and register to gain access to the downloads section. [Code]

  • THUmanDataset. THUman is a 3D real-world human model dataset containing approximately 7000 models.

  • AGORA. AGORA, proposed in a CVPR 2021 paper, consists of 4240 scans spanning more than 350 unique subjects, all paired with SMPL-X fits.

LLMs Enhanced by Multimodal Generation and Editing

📋 Contents

💘 Tips

  • ✅ Paper searching via catalogue: click an entry in the catalogue directly to jump to your research area and browse related papers.
  • ✅ Paper searching via author name: Feel free to search for papers by a specific author via ctrl + F and typing the author name. The dropdown list of authors will automatically expand when searching.
  • ✅ Paper searching via tag: You can also search for related papers via the following tags: customization, interactive, human motion generation, tokenizer. (More tags are ongoing.)

📍 Multimodal Generation

Image Generation

🔅 LLM-based

  • InstantUnify: Integrates Multimodal LLM into Diffusion Models (Aug 2024)

    Qixun Wang, Xu Bai, Rui Wang, Haofan Wang
    Code

  • Image Textualization: An Automatic Framework for Creating Accurate and Detailed Image Descriptions (11 June 2024)

    Renjie Pi, Jianshu Zhang, Jipeng Zhang, Rui Pan, Zhekai Chen, Tong Zhang
    Paper citation

  • T2S-GPT: Dynamic Vector Quantization for Autoregressive Sign Language Production from Text (11 June 2024)

    [ACL 2024] Aoxiong Yin, Haoyuan Li, Kai Shen, Siliang Tang, Yueting Zhuang
    Paper citation

  • Open-World Human-Object Interaction Detection via Multi-modal Prompts (11 June 2024)

    Jie Yang, Bingliang Li, Ailing Zeng, Lei Zhang, Ruimao Zhang
    Paper citation

  • Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? (11 June 2024)

    Xingyu Fu, Muyu He, Yujie Lu, William Yang Wang, Dan Roth
    Paper citation

  • An Image is Worth 32 Tokens for Reconstruction and Generation (11 June 2024)

    Qihang Yu, Mark Weber, Xueqing Deng, Xiaohui Shen, Daniel Cremers, Liang-Chieh Chen
    Paper citation

  • TRINS: Towards Multimodal Language Models that Can Read (10 June 2024)

    [CVPR 2024] Ruiyi Zhang, Yanzhe Zhang, Jian Chen, Yufan Zhou, Jiuxiang Gu, Changyou Chen, Tong Sun
    Paper citation

  • Chameleon: Mixed-Modal Early-Fusion Foundation Models (16 May 2024)

    Chameleon Team
    Paper citation

  • Graphic Design with Large Multimodal Model (22 Apr 2024)

    Yutao Cheng, Zhao Zhang, Maoke Yang, Hui Nie, Chunyuan Li, Xinglong Wu, Jie Shao
    Paper citation Code

  • PMG: Personalized Multimodal Generation with Large Language Models (7 Apr 2024)

    Xiaoteng Shen, Rui Zhang, Xiaoyan Zhao, Jieming Zhu, Xi Xiao
    Paper citation

  • MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control (19 Mar 2024)

    Enshen Zhou, Yiran Qin, Zhenfei Yin, Yuzhou Huang, Ruimao Zhang, Lu Sheng, Yu Qiao, Jing Shao
    Paper citation Code Project_Page

  • ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment (8 Mar 2024)

    Xiwei Hu, Rui Wang, Yixiao Fang, Bin Fu, Pei Cheng, Gang Yu
    Paper citation Code Project_Page

  • StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis (30 Jan 2024)

    Zecheng Tang, Chenfei Wu, Zekai Zhang, Mingheng Ni, Shengming Yin, Yu Liu, Zhengyuan Yang, Lijuan Wang, Zicheng Liu, Juntao Li, Nan Duan
    Paper citation tokenizer

  • DiffusionGPT: LLM-Driven Text-to-Image Generation System (18 Jan 2024)

    Jie Qin, Jie Wu, Weifeng Chen, Yuxi Ren, Huixia Li, Hefeng Wu, Xuefeng Xiao, Rui Wang, Shilei Wen
    Paper citation Code

  • StarVector: Generating Scalable Vector Graphics Code from Images (17 Dec 2023)

    Juan A. Rodriguez, Shubham Agarwal, Issam H. Laradji, Pau Rodriguez, David Vazquez, Christopher Pal, Marco Pedersoli
    Paper citation Code

  • VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation (14 Dec 2023)

    Jinguo Zhu, Xiaohan Ding, Yixiao Ge, Yuying Ge, Sijie Zhao, Hengshuang Zhao, Xiaohua Wang, Ying Shan
    Paper citation Code

  • StoryGPT-V: Large Language Models as Consistent Story Visualizers (13 Dec 2023)

    Xiaoqian Shen, Mohamed Elhoseiny
    Paper citation

  • GENIXER: Empowering Multimodal Large Language Models as a Powerful Data Generator (11 Dec 2023)

    Henry Hengyuan Zhao, Pan Zhou, Mike Zheng Shou
    Paper citation

  • Customization Assistant for Text-to-image Generation (5 Dec 2023)

    Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun
    Paper citation customization

  • ChatIllusion: Efficient-Aligning Interleaved Generation ability with Visual Instruction Model (29 Nov 2023)

    Xiaowei Chi, Yijiang Liu, Zhengkai Jiang, Rongyu Zhang, Ziyi Lin, Renrui Zhang, Peng Gao, Chaoyou Fu, Shanghang Zhang, Qifeng Liu, Yike Guo
    Paper citation Code

  • DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback (29 Nov 2023)

    Jiao Sun, Deqing Fu, Yushi Hu, Su Wang, Royi Rassin, Da-Cheng Juan, Dana Alon, Charles Herrmann, Sjoerd van Steenkiste, Ranjay Krishna, Cyrus Rashtchian
    Paper citation

  • COLE: A Hierarchical Generation Framework for Graphic Design (28 Nov 2023)

    Peidong Jia, Chenxuan Li, Zeyu Liu, Yichao Shen, Xingru Chen, Yuhui Yuan, Yinglin Zheng, Dong Chen, Ji Li, Xiaodong Xie, Shanghang Zhang, Baining Guo
    Paper citation Project_Page

  • TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering (28 Nov 2023)

    Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, Furu Wei
    Paper citation Project_Page Code Demo

  • LLMGA: Multimodal Large Language Model based Generation Assistant (27 Nov 2023)

    Bin Xia, Shiyin Wang, Yingfan Tao, Yitong Wang, Jiaya Jia
    Paper citation Code Project_Page

  • Self-correcting LLM-controlled Diffusion Models (27 Nov 2023)

    Tsung-Han Wu, Long Lian, Joseph E. Gonzalez, Boyi Li, Trevor Darrell
    Paper citation Code

  • Tokenize and Embed ALL for Multi-modal Large Language Models (8 Nov 2023)

    Zhen Yang, Yingxue Zhang, Fandong Meng, Jie Zhou
    Paper citation tokenizer

  • WordArt Designer: User-Driven Artistic Typography Synthesis using Large Language Models (20 Oct 2023)

    Jun-Yan He, Zhi-Qi Cheng, Chenyang Li, Jingdong Sun, Wangmeng Xiang, Xianhui Lin, Xiaoyang Kang, Zengke Jin, Yusen Hu, Bin Luo, Yifeng Geng, Xuansong Xie, Jingren Zhou
    Paper citation

  • LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts (16 Oct 2023)

    [ICLR 2024] Hanan Gani, Shariq Farooq Bhat, Muzammal Naseer, Salman Khan, Peter Wonka
    Paper citation Code

  • Making Multimodal Generation Easier: When Diffusion Models Meet LLMs (13 Oct 2023)

    Xiangyu Zhao, Bo Liu, Qijiong Liu, Guangyuan Shi, Xiao-Ming Wu
    Paper citation Code

  • Idea2Img: Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation (12 Oct 2023)

    Zhengyuan Yang, Jianfeng Wang, Linjie Li, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Lijuan Wang
    Paper citation Project_Page Code

  • OpenLEAF: Open-Domain Interleaved Image-Text Generation and Evaluation (11 Oct 2023)

    Jie An, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Lijuan Wang, Jiebo Luo
    Paper citation

  • Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models (11 Oct 2023)

    Zeqiang Lai, Xizhou Zhu, Jifeng Dai, Yu Qiao, Wenhai Wang
    Paper citation Project_Page Code

  • [DALL-E 3] Improving Image Generation with Better Captions

    James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, Aditya Ramesh
    Paper citation Project_Page

  • MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens (3 Oct 2023)
    Kaizhi Zheng, Xuehai He, Xin Eric Wang.
    Paper citation Project_Page Code

  • Making LLaMA SEE and Draw with SEED Tokenizer (2 Oct 2023)

    Yuying Ge, Sijie Zhao, Ziyun Zeng, Yixiao Ge, Chen Li, Xintao Wang, Ying Shan
    Paper citation Project_Page Code Demo tokenizer

  • InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists (30 Sep 2023)

    Yulu Gan, Sungwoo Park, Alexander Schubert, Anthony Philippakis, Ahmed M. Alaa
    Paper citation Code Demo

  • InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition (26 Sep 2023)

    Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Haodong Duan, Songyang Zhang, Shuangrui Ding, Wenwei Zhang, Hang Yan, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, Jiaqi Wang
    Paper citation Code

  • Text-to-Image Generation for Abstract Concepts (26 Sep 2023)

    Jiayi Liao, Xu Chen, Qiang Fu, Lun Du, Xiangnan He, Xiang Wang, Shi Han, Dongmei Zhang
    Paper citation

  • DreamLLM: Synergistic Multimodal Comprehension and Creation (20 Sep 2023)

    [ICLR 2024] Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, Hongyu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, Li Yi
    Paper citation Project_Page Code

  • SwitchGPT: Adapting Large Language Models for Non-Text Outputs (14 Sep 2023)
    Xinyu Wang, Bohan Zhuang, Qi Wu.
    Paper citation Code

  • NExT-GPT: Any-to-Any Multimodal LLM (11 Sep 2023)

    Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, Tat-Seng Chua
    Paper citation Project_Page Code Demo

  • LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation (9 Aug 2023)

    [ACM MM 2023] Leigang Qu, Shengqiong Wu, Hao Fei, Liqiang Nie, Tat-Seng Chua
    Paper citation Project_Page Code

  • Planting a SEED of Vision in Large Language Model (16 Jul 2023)

    Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, Ying Shan
    Paper citation Project_Page Code

  • Generative Pretraining in Multimodality (11 Jul 2023)

    Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, Xinlong Wang
    Paper citation Code Demo

  • SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs (30 Jun 2023)

    [NeurIPS 2023 Spotlight] Lijun Yu, Yong Cheng, Zhiruo Wang, Vivek Kumar, Wolfgang Macherey, Yanping Huang, David A. Ross, Irfan Essa, Yonatan Bisk, Ming-Hsuan Yang, Kevin Murphy, Alexander G. Hauptmann, Lu Jiang
    Paper citation

  • Controllable Text-to-Image Generation with GPT-4 (29 May 2023)

    Tianjun Zhang, Yi Zhang, Vibhav Vineet, Neel Joshi, Xin Wang
    Paper citation Project_Page

  • Generating Images with Multimodal Language Models (26 May 2023)
    [NeurIPS 2023] Jing Yu Koh, Daniel Fried, Ruslan Salakhutdinov.
    Paper citation Project_Page Code

  • LayoutGPT: Compositional Visual Planning and Generation with Large Language Models (24 May 2023)

    [NeurIPS 2023] Weixi Feng, Wanrong Zhu, Tsu-jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, William Yang Wang
    Paper citation Project_Page Code

  • Visual Programming for Text-to-Image Generation and Evaluation (24 May 2023)
    [NeurIPS 2023] Jaemin Cho, Abhay Zala, Mohit Bansal.
    Paper citation Project_Page Code

  • LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models (23 May 2023)

    Long Lian, Boyi Li, Adam Yala, Trevor Darrell
    Paper citation Project_Page Code

  • Interactive Data Synthesis for Systematic Vision Adaptation via LLMs-AIGCs Collaboration (22 May 2023)

    Qifan Yu, Juncheng Li, Wentao Ye, Siliang Tang, Yueting Zhuang
    Paper citation Code

  • LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation (18 May 2023)

    [NeurIPS 2023] Yujie Lu, Xianjun Yang, Xiujun Li, Xin Eric Wang, William Yang Wang
    Paper citation Code

  • SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models (9 May 2023)

    [ACM MM 2023] Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin
    Paper Code

  • Grounding Language Models to Images for Multimodal Inputs and Outputs (31 Jan 2023)
    [ICML 2023] Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried.
    Paper citation Project_Page Code

  • [RPG-DiffusionMaster] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (22 Jan 2024)

    [ICML 2024] Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Bin Cui
    Paper citation Code

  • RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models (20 Feb 2024)

    Xinchen Zhang, Ling Yang, Yaqi Cai, Zhaochen Yu, Kai-Ni Wang, Jiake Xie, Ye Tian, Minkai Xu, Yong Tang, Yujiu Yang, Bin Cui
    Paper citation Project_Page Code

Non-LLM-based (Clip/T5)

  • InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation (3 Apr 2024)

    Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, Anthony Chen
    Paper citation Project_Page Code

  • InstantID: Zero-shot Identity-Preserving Generation in Seconds (15 Jan 2024)

    Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, Yao Hu
    Paper citation Project_Page Code

  • PIXART-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis (30 Sep 2023)

    [ICLR 2024] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, Zhenguo Li
    Paper citation Project_Page Code Demo

  • TextDiffuser: Diffusion Models as Text Painters (18 May 2023)

    [NeurIPS 2023] Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, Furu Wei
    Paper citation Project_Page Code Demo

  • TiGAN: Text-Based Interactive Image Generation and Manipulation (Dec 2022)

    [AAAI 2022] Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Chris Tensmeyer, Tong Yu, Changyou Chen, Jinhui Xu, Tong Sun
    Paper citation
    Tags: interactive

  • Multi-Concept Customization of Text-to-Image Diffusion (8 Dec 2022)

    [CVPR 2023] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, Jun-Yan Zhu
    Paper citation Project_Page Code
    Tags: customization

  • DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation (25 Aug 2022)

    [CVPR 2023] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman
    Paper citation Project_Page
    Tags: customization

  • An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion (2 Aug 2022)

    Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or
    Paper citation Project_Page Code
    Tags: customization
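
    As a toy illustration of the customization idea behind Textual Inversion — the generator stays frozen and only the embedding of one new pseudo-token is optimized against features of the subject — here is a minimal numpy sketch. The linear "model" `W`, the `target` features, and all shapes are illustrative stand-ins, not the paper's actual diffusion architecture:

    ```python
    import numpy as np

    # Toy sketch of Textual Inversion-style customization (illustrative only):
    # the "model" W stays frozen; we optimize just one new token embedding `e`
    # so the frozen model maps it onto the target subject's features.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 4))    # frozen model: token embedding -> features
    target = rng.normal(size=8)    # features of the subject to personalize on
    e = np.zeros(4)                # the single trainable pseudo-token embedding

    lr = 0.02
    for _ in range(10_000):
        residual = W @ e - target  # reconstruction error under the frozen model
        e -= lr * (W.T @ residual) # gradient step w.r.t. the embedding only

    print(round(float(np.linalg.norm(W @ e - target)), 4))
    ```

    The real method plugs the learned embedding into the text encoder of a frozen text-to-image diffusion model and minimizes the diffusion reconstruction loss; the sketch only shows the "optimize one embedding, freeze everything else" structure.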

  • Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (23 May 2022)
    [NeurIPS 2022]

    Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.
    Paper citation Project_Page

Datasets

  • MIMIC-IT: Multi-Modal In-Context Instruction Tuning (8 Jun 2023)

    [NeurIPS 2023] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, Ziwei Liu
    Paper citation Code

  • [LAION-Glyph] GlyphControl: Glyph Conditional Control for Visual Text Generation (29 May 2023)

    [NeurIPS 2023] Yukang Yang, Dongnan Gui, Yuhui Yuan, Weicong Liang, Haisong Ding, Han Hu, Kai Chen
    Paper citation Code

  • [MARIO-10M] TextDiffuser: Diffusion Models as Text Painters (18 May 2023)

    [NeurIPS 2023] Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, Furu Wei
    Paper citation Project_Page Code

  • DataComp: In search of the next generation of multimodal datasets (27 Apr 2023)

    [NeurIPS 2023] Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt
    Paper citation Project_Page Code

  • [LLava-instruct] Visual Instruction Tuning (17 Apr 2023)

    [NeurIPS 2023] Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
    Paper citation Project_Page Code

  • Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text (14 Apr 2023)

    [NeurIPS 2023] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, Yejin Choi
    Paper citation Code

  • Language Is Not All You Need: Aligning Perception with Language Models (27 Feb 2023)

    [NeurIPS 2023] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei
    Paper citation

  • COYO-700M: Image-Text Pair Dataset (31 Aug 2022)
    Code

  • LAION-5B: An open large-scale dataset for training next generation image-text models (16 Oct 2022)

    [NeurIPS 2022] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev
    Paper citation Project_Page

  • LAION COCO: 600M SYNTHETIC CAPTIONS FROM LAION2B-EN (15 Sep 2022)

    Christoph Schuhmann, Andreas Köpf, Theo Coombes, Richard Vencu, Benjamin Trom, Romain Beaumont
    Project_Page

  • [M3W] Flamingo: a Visual Language Model for Few-Shot Learning (29 Apr 2022)

    [NeurIPS 2022] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan
    Paper citation

Video Generation

🔅 LLM-based

  • Compositional 3D-aware Video Generation with LLM Director (31 Aug 2024)

    Hanxin Zhu, Tianyu He, Anni Tang, Junliang Guo, Zhibo Chen, Jiang Bian
    Paper Project_Page

  • Anim-Director: A Large Multimodal Model Powered Agent for Controllable Animation Video Generation (19 Aug 2024)

    [SIGGRAPH Asia 2024] Yunxin Li, Haoyuan Shi, Baotian Hu, Longyue Wang, Jiashun Zhu, Jinyi Xu, Zhen Zhao, Min Zhang
    Paper Code

  • [BSQ-ViT] Image and Video Tokenization with Binary Spherical Quantization (11 Jun 2024)
    [Tech Report] Yue Zhao, Yuanjun Xiong, Philipp Krähenbühl
    Paper tokenizer

  • DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation (11 Mar 2024)

    Guosheng Zhao, Xiaofeng Wang, Zheng Zhu, Xinze Chen, Guan Huang, Xiaoyi Bao, Xingang Wang
    Paper citation Project_Page

  • [Sora] Video generation models as world simulators (15 Feb 2024)

    Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, Aditya Ramesh
    Paper

  • [LGVI] Towards Language-Driven Video Inpainting via Multimodal Large Language Models (18 Jan 2024)

    Jianzong Wu, Xiangtai Li, Chenyang Si, Shangchen Zhou, Jingkang Yang, Jiangning Zhang, Yining Li, Kai Chen, Yunhai Tong, Ziwei Liu, Chen Change Loy
    Paper citation Project_Page

  • Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization (2 Jan 2024)

    Yang Jin, Zhicheng Sun, Kun Xu, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, Kun Gai, Yadong Mu
    Paper citation Project_Page tokenizer

  • VideoDrafter: Content-Consistent Multi-Scene Video Generation with LLM (2 Jan 2024)

    Fuchen Long, Zhaofan Qiu, Ting Yao, Tao Mei
    Paper citation Project_Page

  • [PRO-Motion] Plan, Posture and Go: Towards Open-World Text-to-Motion Generation (22 Dec 2023)

    Jinpeng Liu, Wenxun Dai, Chunyu Wang, Yiji Cheng, Yansong Tang, Xin Tong
    Paper citation Project_Page

  • VideoPoet: A Large Language Model for Zero-Shot Video Generation (21 Dec 2023)

    Dan Kondratyuk, Lijun Yu, Xiuye Gu, José Lezama, Jonathan Huang, Rachel Hornung, Hartwig Adam, Hassan Akbari, Yair Alon, Vighnesh Birodkar, Yong Cheng, Ming-Chang Chiu, Josh Dillon, Irfan Essa, Agrim Gupta, Meera Hahn, Anja Hauth, David Hendon, Alonso Martinez, David Minnen, David Ross, Grant Schindler, Mikhail Sirotenko, Kihyuk Sohn, Krishna Somandepalli, Huisheng Wang, Jimmy Yan, Ming-Hsuan Yang, Xuan Yang, Bryan Seybold, Lu Jiang
    Paper citation Project_Page

  • FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax (27 Nov 2023)

    [arXiv 2023] Yu Lu, Linchao Zhu, Hehe Fan, Yi Yang
    Paper citation

  • InterControl: Generate Human Motion Interactions by Controlling Every Joint (27 Nov 2023)

    Zhenzhi Wang, Jingbo Wang, Dahua Lin, Bo Dai
    Paper citation Code
    Tags: human motion generation

  • MotionLLM: Multimodal Motion-Language Learning with Large Language Models (27 May 2024)

    Qi Wu, Yubo Zhao, Yifan Wang, Yu-Wing Tai, Chi-Keung Tang
    Paper citation Project_Page
    Tags: general human motion generation

  • GPT4Motion: Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning (21 Nov 2023)

    Jiaxi Lv, Yi Huang, Mingfu Yan, Jiancheng Huang, Jianzhuang Liu, Yifan Liu, Yafei Wen, Xiaoxin Chen, Shifeng Chen
    Paper citation Project_Page

  • [MAGVIT-v2] Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation (9 Oct 2023)

    Lijun Yu, José Lezama, Nitesh B. Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, Yong Cheng, Agrim Gupta, Xiuye Gu, Alexander G. Hauptmann, Boqing Gong, Ming-Hsuan Yang, Irfan Essa, David A. Ross, Lu Jiang
    Paper citation tokenizer
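
    The headline idea in MAGVIT-v2 is a lookup-free quantizer: rather than nearest-neighbor search over a learned codebook, each latent dimension is quantized independently to ±1, and the resulting sign pattern is itself the token id (an integer in [0, 2^d)). A minimal numpy sketch of just that quantization step, with illustrative shapes and no training:

    ```python
    import numpy as np

    def lfq_quantize(z):
        """Lookup-free quantization in the spirit of MAGVIT-v2 (sketch):
        snap each latent dimension to {-1, +1}; the sign bits double as
        the token id, so no codebook is stored or searched."""
        codes = np.where(z >= 0, 1.0, -1.0)               # per-dim binarization
        bits = (codes > 0).astype(np.int64)               # +1 -> 1, -1 -> 0
        token_ids = bits @ (2 ** np.arange(z.shape[-1]))  # bit pattern -> integer
        return codes, token_ids

    z = np.array([[0.3, -1.2, 0.7, 0.1],
                  [-0.5, 0.4, -0.2, -0.9]])
    codes, ids = lfq_quantize(z)
    print(ids.tolist())  # -> [13, 2]: one token id per latent vector
    ```

    The implicit vocabulary size is 2^d, which is how the paper scales the token vocabulary without codebook lookups; training details (straight-through gradients, auxiliary losses) are omitted from this sketch.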

  • [LVD] LLM-grounded Video Diffusion Models (29 Sep 2023)

    Long Lian, Baifeng Shi, Adam Yala, Trevor Darrell, Boyi Li
    Paper citation Project_Page Code

  • VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (26 Sep 2023)

    [arXiv 2023] Han Lin, Abhay Zala, Jaemin Cho, Mohit Bansal
    Paper citation Project_Page Code

  • Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator (25 Sep 2023)

    [NeurIPS 2023] Hanzhuo Huang, Yufan Feng, Cheng Shi, Lan Xu, Jingyi Yu, Sibei Yang
    Paper citation Code

  • [Dysen-VDM] Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models (26 Aug 2023)

    [CVPR 2024] Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Tat-Seng Chua
    Paper citation Project_Page Code

  • [DirecT2V] Large Language Models are Frame-level Directors for Zero-shot Text-to-Video Generation (23 May 2023)

    [arXiv 2023] Susung Hong, Junyoung Seo, Sunghwan Hong, Heeseong Shin, Seungryong Kim
    Paper citation Code

  • Text2Motion: From Natural Language Instructions to Feasible Plans (21 Mar 2023)

    [Autonomous Robots 2023] Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, Jeannette Bohg
    Paper citation Project_Page Code

Non-LLM-based

  • OSV: One Step is Enough for High-Quality Image to Video Generation (17 Sep 2024)

    Xiaofeng Mao, Zhengkai Jiang, Fu-Yun Wang, Wenbing Zhu, Jiangning Zhang, Hao Chen, Mingmin Chi, Yabiao Wang
    Paper

  • [PAB] Real-Time Video Generation with Pyramid Attention Broadcast (26 Jun 2024)

    Xuanlei Zhao, Xiaolong Jin, Kai Wang, Yang You
    Project_Page Code

  • Video-Infinity: Distributed Long Video Generation (24 Jun 2024)

    Zhenxiong Tan, Xingyi Yang, Songhua Liu, Xinchao Wang
    Paper

  • Pandora: Towards General World Model with Natural Language Actions and Video (12 Jun 2024)

    Jiannan Xiang, Guangyi Liu, Yi Gu, Qiyue Gao, Yuting Ning, Yuheng Zha, Zeyu Feng, Tianhua Tao, Shibo Hao, Yemin Shi, Zhengzhong Liu, Eric P. Xing, Zhiting Hu
    Paper Project_Page Code

  • Text-Animator: Controllable Visual Text Video Generation (25 Jun 2024)

    Lin Liu, Quande Liu, Shengju Qian, Yuan Zhou, Wengang Zhou, Houqiang Li, Lingxi Xie, Qi Tian
    Paper Project_Page

  • MotionBooth: Motion-Aware Customized Text-to-Video Generation (25 Jun 2024)

    Jianzong Wu, Xiangtai Li, Yanhong Zeng, Jiangning Zhang, Qianyu Zhou, Yining Li, Yunhai Tong, Kai Chen
    Paper Project_Page citation

  • FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models (24 Jun 2024)

    Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, Ziwei Liu
    Paper Project_Page citation Code

  • Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model (22 Jun 2024)

    Min Zhao, Hongzhou Zhu, Chendong Xiang, Kaiwen Zheng, Chongxuan Li, Jun Zhu
    Paper Project_Page citation Code

  • Image Conductor: Precision Control for Interactive Video Synthesis (21 Jun 2024)

    Yaowei Li, Xintao Wang, Zhaoyang Zhang, Zhouxia Wang, Ziyang Yuan, Liangbin Xie, Yuexian Zou, Ying Shan
    Paper Project_Page citation Code

  • VIDEOSCORE: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation (21 Jun 2024)

    Xuan He, Dongfu Jiang, Ge Zhang, Max Ku, Achint Soni, Sherman Siu, Haonan Chen, Abhranil Chandra, Ziyan Jiang, Aaran Arulraj, Kai Wang, Quy Duc Do, Yuansheng Ni, Bohan Lyu, Yaswanth Narsupalli, Rongqi Fan, Zhiheng Lyu, Yuchen Lin, Wenhu Chen
    Paper Project_Page citation Code

  • Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (24 Jun 2024)

    Junbang Liang, Ruoshi Liu, Ege Ozguroglu, Sruthi Sudhakar, Achal Dave, Pavel Tokmakov, Shuran Song, Carl Vondrick
    Paper Project_Page citation

  • ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation (26 Jun 2024)

    Shenghai Yuan, Jinfa Huang, Yongqi Xu, Yaoyang Liu, Shaofeng Zhang, Yujun Shi, Ruijie Zhu, Xinhua Cheng, Jiebo Luo, Li Yuan
    Paper Project_Page Code

  • [MCM] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation (11 Jun 2024)

    Yuanhao Zhai, Kevin Lin, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Chung-Ching Lin, David Doermann, Junsong Yuan, Lijuan Wang
    Paper Project_Page Code

  • Searching Priors Makes Text-to-Video Synthesis Better (5 Jun 2024)

    Haoran Cheng, Liang Peng, Linxuan Xia, Yuepeng Hu, Hengjia Li, Qinglin Lu, Xiaofei He, Boxi Wu
    Paper citation Project_Page

  • ZeroSmooth: Training-free Diffuser Adaptation for High Frame Rate Video Generation (3 Jun 2024)

    Shaoshu Yang, Yong Zhang, Xiaodong Cun, Ying Shan, Ran He
    Paper citation Project_Page

  • CV-VAE: A Compatible Video VAE for Latent Generative Video Models (29 May 2024)

    Jiaqi Xu, Xinyi Zou, Kunzhe Huang, Yunkuo Chen, Bo Liu, MengLi Cheng, Xing Shi, Jun Huang
    Paper citation Project_Page Code

  • EasyAnimate: A High-Performance Long Video Generation Method based on Transformer Architecture (30 May 2024)

    Sijie Zhao, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Muyao Niu, Xiaoyu Li, Wenbo Hu, Ying Shan
    Paper citation Project_Page Code

  • [MOFT] Video Diffusion Models are Training-free Motion Interpreter and Controller (23 Mar 2024)

    Zeqi Xiao, Yifan Zhou, Shuai Yang, Xingang Pan
    Paper citation Project_Page

  • StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text (21 Mar 2024)

    Roberto Henschel, Levon Khachatryan, Daniil Hayrapetyan, Hayk Poghosyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, Humphrey Shi
    Paper citation Code

  • Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis (22 Feb 2024)

    Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, Sergey Tulyakov
    Paper citation Project_Page

  • VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models (17 Jan 2024)

    Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, Ying Shan
    Paper citation Project_Page Code

  • VBench: Comprehensive Benchmark Suite for Video Generative Models (29 Nov 2023)

    Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, Ziwei Liu
    Paper citation Project_Page Code Demo

  • Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets (25 Nov 2023)

    Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, Robin Rombach
    Paper citation Project_Page Code

  • VideoCrafter1: Open Diffusion Models for High-Quality Video Generation (30 Oct 2023)

    Haoxin Chen, Menghan Xia, Yingqing He, Yong Zhang, Xiaodong Cun, Shaoshu Yang, Jinbo Xing, Yaofang Liu, Qifeng Chen, Xintao Wang, Chao Weng, Ying Shan
    Paper citation Project_Page Code Demo

  • DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors (18 Oct 2023)

    Jinbo Xing, Menghan Xia, Yong Zhang, Haoxin Chen, Wangbo Yu, Hanyuan Liu, Xintao Wang, Tien-Tsin Wong, Ying Shan
    Paper citation Project_Page Code

  • FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling (23 Oct 2023)

    Haonan Qiu, Menghan Xia, Yong Zhang, Yingqing He, Xintao Wang, Ying Shan, Ziwei Liu
    Paper citation Project_Page Code Demo

  • Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation (13 Jul 2023)

    Yingqing He, Menghan Xia, Haoxin Chen, Xiaodong Cun, Yuan Gong, Jinbo Xing, Yong Zhang, Xintao Wang, Chao Weng, Ying Shan, Qifeng Chen
    Paper citation Project_Page Code

  • Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance (1 Jun 2023)

    Jinbo Xing, Menghan Xia, Yuxin Liu, Yuechen Zhang, Yong Zhang, Yingqing He, Hanyuan Liu, Haoxin Chen, Xiaodong Cun, Xintao Wang, Ying Shan, Tien-Tsin Wong
    Paper citation Project_Page Code

  • Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos (3 Apr 2023)

    Yue Ma, Yingqing He, Xiaodong Cun, Xintao Wang, Siran Chen, Ying Shan, Xiu Li, Qifeng Chen
    Paper citation Project_Page Code Demo

  • Real-time Controllable Denoising for Image and Video (29 Mar 2023)

    [CVPR 2023] Zhaoyang Zhang, Yitong Jiang, Wenqi Shao, Xiaogang Wang, Ping Luo, Kaimo Lin, Jinwei Gu
    Paper citation

  • VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation (15 Mar 2023)

    Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, Tieniu Tan
    Paper citation

Datasets

  • InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation (13 Jul 2023)

    [ICLR 2024 Spotlight] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, Conghui He, Ping Luo, Ziwei Liu, Yali Wang, Limin Wang, Yu Qiao
    Paper citation Code Demo

  • [HD-VG-130M] VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation (18 May 2023)

    Wenjing Wang, Huan Yang, Zixi Tuo, Huiguo He, Junchen Zhu, Jianlong Fu, Jiaying Liu
    Paper citation Code

  • [VideoCC3M] Learning Audio-Video Modalities from Image Captions (18 May 2023)

    [ECCV 2022] Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid
    Paper citation Code

  • CelebV-Text: A Large-Scale Facial Text-Video Dataset (26 Mar 2023)

    [CVPR 2023] Jianhui Yu, Hao Zhu, Liming Jiang, Chen Change Loy, Weidong Cai, Wayne Wu
    Paper citation Project_Page Code Demo

3D Generation

🔅 LLM-based

  • SceneCraft: An LLM Agent for Synthesizing 3D Scene as Blender Code (2 Mar 2024)

    Ziniu Hu, Ahmet Iscen, Aashi Jain, Thomas Kipf, Yisong Yue, David A. Ross, Cordelia Schmid, Alireza Fathi
    Paper

  • MotionScript: Natural Language Descriptions for Expressive 3D Human Motions (19 Dec 2023)

    Payam Jome Yazdian, Eric Liu, Li Cheng, Angelica Lim
    Paper citation

  • HOLODECK: Language Guided Generation of 3D Embodied AI Environments (19 Dec 2023)

    [CVPR 2024] Yue Yang, Fan-Yun Sun, Luca Weihs, et al.
    Paper citation Code

  • PoseGPT: Chatting about 3D Human Pose (30 Nov 2023)

    [CVPR 2024] Yao Feng, Jing Lin, Sai Kumar Dwivedi, et al.
    Paper citation Code

  • 3D-GPT: Procedural 3D Modeling with Large Language Models (19 Oct 2023)

    Chunyi Sun*, Junlin Han*, Weijian Deng, et al.
    Paper citation Code

Non-LLM-based (Clip/T5)

  • DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion (12 Mar 2024)

    Yuanze Lin, Ronald Clark, Philip Torr
    Paper citation Code

  • Consistent3D: Towards Consistent High-Fidelity Text-to-3D Generation with Deterministic Sampling Prior (12 Mar 2024)

    [CVPR 2024] Zike Wu, Pan Zhou, Xuanyu Yi, et al.
    Paper citation Code

  • AToM: Amortized Text-to-Mesh using 2D Diffusion (1 Feb 2024)

    Guocheng Qian, Junli Cao, Aliaksandr Siarohin, et al.
    Paper citation Code

  • DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior (12 Mar 2024)

    [CVPR 2024] Tianyu Huang, Yihan Zeng, Zhilu Zhang, et al.
    Paper citation Code

  • UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation (14 Dec 2023)

    Zexiang Liu, Yangguang Li, Youtian Lin, et al.
    Paper citation Code

  • Sherpa3D: Boosting High-Fidelity Text-to-3D Generation via Coarse 3D Prior (11 Dec 2023)

    [CVPR 2024] Fangfu Liu, Diankun Wu, Yi Wei, et al.
    Paper citation Code

  • Learn to Optimize Denoising Scores for 3D Generation: A Unified and Improved Diffusion Prior on NeRF and 3D Gaussian Splatting (8 Dec 2023)

    Xiaofeng Yang, Yiwen Chen, Cheng Chen, et al.
    Paper citation Code

  • DreamPropeller: Supercharge Text-to-3D Generation with Parallel Sampling (28 Nov 2023)

    Linqi Zhou, Andy Shih, Chenlin Meng, et al.
    Paper citation Code

  • RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D (28 Nov 2023)

    [CVPR 2024] Lingteng Qiu, Guanying Chen, Xiaodong Gu, et al.
    Paper citation Code

  • DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models (30 Nov 2023)

    [CVPR 2024] Yukang Cao, Yan-Pei Cao, Kai Han, et al.
    Paper citation Code

  • LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching (2 Dec 2023)

    [CVPR 2024] Yixun Liang, Xin Yang, Jiantao Lin, et al.
    Paper citation Code

  • GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models (12 Oct 2023)

    [CVPR 2024] Taoran Yi, Jiemin Fang, Junjie Wang, et al.
    Paper citation Code

  • Text-to-3D using Gaussian Splatting (28 Sep 2023)

    [CVPR 2024] Zilong Chen, Feng Wang, Huaping Liu
    Paper citation Code

  • EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior (10 Sep 2023)

    [CVPR 2024] Zhipeng Hu, Minda Zhao, Chaoyi Zhao, Xinyue Liang, Lincheng Li, Zeng Zhao, Changjie Fan, Xiaowei Zhou, Xin Yu
    Paper citation

  • TADA! Text to Animatable Digital Avatars (21 Aug 2023)

    [3DV 2024] Tingting Liao, Hongwei Yi, Yuliang Xiu, et al.
    Paper citation Code

  • SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D (20 Oct 2023)

    [ICLR 2024] Weiyu Li, Rui Chen, Xuelin Chen, et al.
    Paper citation Code

  • Noise-Free Score Distillation (26 Oct 2023)

    [ICLR 2024] Oren Katzir, Or Patashnik, Daniel Cohen-Or, et al.
    Paper citation Code

  • Text-to-3D with Classifier Score Distillation (26 Oct 2023)

    [ICLR 2024] Xin Yu, Yuan-Chen Guo, Yangguang Li, et al.
    Paper citation Code

  • HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance (28 Nov 2023)

    [ICLR 2024] Junzhe Zhu, Peiye Zhuang
    Paper citation Code

  • MVDream: Multi-view Diffusion for 3D Generation (31 Aug 2023)

    [ICLR 2024] Yichun Shi, Peng Wang, Jianglong Ye, et al.
    Paper citation Code

  • DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation (28 Sep 2023)

    [ICLR 2024] Jiaxiang Tang, Jiawei Ren, Hang Zhou, et al.
    Paper citation Code

  • Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation (11 Apr 2023)

    [ICLR 2024] Junyoung Seo, Wooseok Jang, Min-Seop Kwak, et al.
    Paper citation Code

  • IT3D: Improved Text-to-3D Generation with Explicit View Synthesis (22 Aug 2023)

    [AAAI 2024] Yiwen Chen, Chi Zhang, Xiaofeng Yang, et al.
    Paper citation Code

  • HD-Fusion: Detailed Text-to-3D Generation Leveraging Multiple Noise Estimation (30 Jul 2023)

    [WACV 2024] Jinbo Wu, Xiaobo Gao, Xing Liu, et al.
    Paper citation

  • Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond (11 Apr 2023)

    Mohammadreza Armandpour, Ali Sadeghian, Huangjie Zheng, et al.
    Paper citation Code

  • Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures (14 Nov 2022)

    [CVPR 2023] Gal Metzer, Elad Richardson, Or Patashnik, et al.
    Paper citation Code

  • Magic3D: High-Resolution Text-to-3D Content Creation (18 Nov 2022)

    [CVPR 2023 Highlight] Chen-Hsuan Lin, Jun Gao, Luming Tang, et al.
    Paper citation

  • Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation (1 Dec 2022)

    [CVPR 2023] Haochen Wang, Xiaodan Du, Jiahao Li, et al.
    Paper citation Code

  • High-fidelity 3D Face Generation from Natural Language Descriptions (5 May 2023)

    [CVPR 2023] Menghua Wu, Hao Zhu, Linjia Huang, et al.
    Paper citation Code

  • RODIN: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion (12 Dec 2022)

    [CVPR 2023 Highlight] Tengfei Wang, Bo Zhang, Ting Zhang, et al.
    Paper citation

  • ClipFace: Text-guided Editing of Textured 3D Morphable Models (24 Apr 2023)

    [SIGGRAPH 2023] Shivangi Aneja, Justus Thies, Angela Dai, et al.
    Paper citation Code

  • DreamFusion: Text-to-3D using 2D Diffusion (29 Sep 2022)

    [ICLR 2023 Oral] Ben Poole, Ajay Jain, Jonathan T. Barron, et al.
    Paper citation

  • ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (25 May 2023)

    [NeurIPS 2023 Spotlight] Zhengyi Wang, Cheng Lu, Yikai Wang, et al.
    Paper citation Code

  • HeadSculpt: Crafting 3D Head Avatars with Text (25 May 2023)

    [NeurIPS 2023] Xiao Han, Yukang Cao, Kai Han, et al.
    Paper citation Code

  • ATT3D: Amortized Text-to-3D Object Synthesis (6 Jun 2023)

    [ICCV 2023] Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, et al.
    Paper citation

  • Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation (24 Mar 2023)

    [ICCV 2023] Rui Chen, Yongwei Chen, Ningxin Jiao, et al.
    Paper citation Code

  • Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models (10 Sep 2023)

    [ICCV 2023] Lukas Höllein, Ang Cao, Andrew Owens, et al.
    Paper citation Code

  • X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance (28 Mar 2023)

    [ICCV 2023] Yiwei Ma, Xiaoqing Zhang, Xiaoshuai Sun, et al.
    Paper citation Code

  • StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation (31 May 2023)

    Chi Zhang, Yiwen Chen, Yijun Fu, et al.
    Paper citation Code

  • TextMesh: Generation of Realistic 3D Meshes From Text Prompts (24 Apr 2023)

    [3DV 2023] Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, et al.
    Paper citation Code

  • Clip-forge: Towards zero-shot text-to-shape generation (28 Apr 2022)

    [CVPR 2022] Aditya Sanghi, Hang Chu, Joseph G. Lambourne, et al.
    Paper citation Code

  • Zero-Shot Text-Guided Object Generation with Dream Fields (2 Dec 2021)

    [CVPR 2022] Ajay Jain, Ben Mildenhall, Jonathan T. Barron, et al.
    Paper citation Project_Page Code

  • Text2Mesh: Text-Driven Neural Stylization for Meshes (6 Dec 2021)

    [CVPR 2022] Oscar Michel, Roi Bar-On, Richard Liu, et al.
    Paper citation Code

  • TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition (20 Oct 2022)

    [NeurIPS 2022 Spotlight] Yongwei Chen, Rui Chen, Jiabao Lei, et al.
    Paper citation Code

  • CLIP-Mesh: Generating textured meshes from text using pretrained image-text models (24 Mar 2022)

    [SIGGRAPH ASIA 2022] Nasir Mohammad Khalid, Tianhao Xie, Eugene Belilovsky, et al.
    Paper citation Code

  • MotionCLIP: Exposing Human Motion Generation to CLIP Space (15 Mar 2022)

    [ECCV 2022] Guy Tevet, Brian Gordon, Amir Hertz, et al.
    Paper citation Code

Datasets

  • Objaverse-XL: A Universe of 10M+ 3D Objects (11 Jul 2023)

    Matt Deitke, Ruoshi Liu, Matthew Wallingford, et al.
    Paper citation Code

  • Objaverse: A Universe of Annotated 3D Objects (15 Dec 2022)

    [CVPR 2023] Matt Deitke, Dustin Schwenk, Jordi Salvador, et al.
    Paper citation Code

Audio Generation

🔅 LLM-based

  • SongComposer: A Large Language Model for Lyric and Melody Composition in Song Generation (27 Feb 2024)

    Shuangrui Ding, Zihan Liu, Xiaoyi Dong, et al.
    Paper citation Project_Page Code

  • ChatMusician: Understanding and Generating Music Intrinsically with LLM (25 Feb 2024)

    Ruibin Yuan, Hanfeng Lin, Yi Wang, et al.
    Paper citation Project_Page Code Demo

  • AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling (19 Feb 2024)

    Jun Zhan, Junqi Dai, Jiasheng Ye, et al.
    Paper citation Project_Page Code

  • Boosting Large Language Model for Speech Synthesis: An Empirical Study (30 Dec 2023)

    Hongkun Hao, Long Zhou, Shujie Liu, et al.
    Paper citation

  • Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action (28 Dec 2023)

    Jiasen Lu, Christopher Clark, Sangho Lee, et al.
    Paper citation Project_Page Code

  • M2UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models (19 Nov 2023)

    Atin Sakkeer Hussain, Shansong Liu, Chenshuo Sun, et al.
    Paper citation Project_Page Code Demo

  • LauraGPT: Listen, Attend, Understand, and Regenerate Audio with GPT (7 Oct 2023)

    Jiaming Wang, Zhihao Du, Qian Chen, et al.
    Paper citation Project_Page

  • LLaSM: Large Language and Speech Model (30 Aug 2023)

    Yu Shu, Siwei Dong, Guangyao Chen, et al.
    Paper citation Project_Page Code Demo

  • AudioPaLM: A Large Language Model That Can Speak and Listen (22 Jun 2023)

    Paul K. Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, et al.
    Paper citation Project_Page

  • Pengi: An Audio Language Model for Audio Tasks (19 May 2023)

    Soham Deshmukh, Benjamin Elizalde, Rita Singh, et al.
    Paper citation Project_Page Code

  • SpeechGPT: Empowering Large Language Models with Intrinsic Cross-modal Conversational Abilities (18 May 2023)

    Dong Zhang, Shimin Li, Xin Zhang, et al.
    Paper citation Project_Page Code

  • Sparks of Artificial General Intelligence: Early experiments with GPT-4 (22 Mar 2023)

    Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, et al.
    Paper citation

Non-LLM-based

  • Audiobox: Unified Audio Generation with Natural Language Prompts (25 Dec 2023)
    Apoorv Vyas, Bowen Shi, Matthew Le
    Paper citation Project_Page Demo

  • Music ControlNet: Multiple Time-varying Controls for Music Generation (13 Nov 2023)

    Shih-Lun Wu, Chris Donahue, Shinji Watanabe, et al.
    Paper citation Project_Page

  • Loop Copilot: Conducting AI Ensembles for Music Generation and Iterative Editing (19 Oct 2023)

    Yixiao Zhang, Akira Maezawa, Gus Xia, et al.
    Paper citation Project_Page Code

  • MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models (18 Oct 2023)

    Dingyao Yu, Kaitao Song, Peiling Lu, et al.
    Paper citation Code

  • UniAudio: An Audio Foundation Model Toward Universal Audio Generation (1 Oct 2023)
    Dongchao Yang, Jinchuan Tian, Xu Tan
    Paper citation Project_Page Code

  • AudioLM: a Language Modeling Approach to Audio Generation (7 Sep 2022)

    [IEEE/ACM TASLP] Zalán Borsos, Raphaël Marinier, Damien Vincent, et al.
    Paper citation

  • WavJourney: Compositional Audio Creation with Large Language Models (26 Jul 2023)

    Xubo Liu, Zhongkai Zhu, Haohe Liu, et al.
    Paper citation Project_Page Code Demo

  • Investigating the Utility of Surprisal from Large Language Models for Speech Synthesis Prosody (16 Jun 2023)

    [SSW 2023] Sofoklis Kakouros, Juraj Šimko, Martti Vainio, et al.
    Paper citation

  • Simple and Controllable Music Generation (8 Jun 2023)

    Jade Copet, Felix Kreuk, Itai Gat, et al.
    Paper citation Project_Page Code Demo

  • Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation (29 May 2023)

    Jiawei Huang, Yi Ren, Rongjie Huang, et al.
    Paper citation Project_Page

  • AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head (25 Apr 2023)

    Rongjie Huang, Mingze Li, Dongchao Yang, et al.
    Paper citation Code Demo

  • TANGO: Text-to-Audio Generation using Instruction Tuned LLM and Latent Diffusion Model (24 Apr 2023)

    Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, et al.
    Paper citation Project_Page Code Demo

  • HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace (30 Mar 2023)

    Yongliang Shen, Kaitao Song, Xu Tan, et al.
    Paper citation Code Demo

  • Neural codec language models are zero-shot text to speech synthesizers (5 Jan 2023)

    Chengyi Wang, Sanyuan Chen, Yu Wu, et al.
    Paper citation Project_Page

  • MusicLM: Generating Music From Text (26 Jan 2023)

    Andrea Agostinelli, Timo I. Denk, Zalán Borsos, et al.
    Paper citation Project_Page

Datasets

  • Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context (15 Sep 2023)
    Wei Kang, Xiaoyu Yang, Zengwei Yao, et al.
    Paper citation

Generation with Multiple Modalities

🔅 LLM-based

  • C3LLM: Conditional Multimodal Content Generation Using Large Language Models (25 May 2024)

    Zixuan Wang, Qinkai Duan, Yu-Wing Tai, et al.
    Paper citation

  • CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation (30 Nov 2023)

    Zineng Tang, Ziyi Yang, Mahmoud Khademi, et al.
    Paper citation Project_Page Code

  • TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models (8 Nov 2023)

    Zhen Yang, Yingxue Zhang, Fandong Meng, et al.
    Paper citation tokenizer

  • NExT-GPT: Any-to-Any Multimodal LLM (11 Sep 2023)

    Shengqiong Wu, Hao Fei, Leigang Qu, et al.
    Paper citation Project_Page Code Demo

  • CoDi: Any-to-Any Generation via Composable Diffusion (19 May 2023)

    [NeurIPS 2023] Zineng Tang, Ziyi Yang, Chenguang Zhu, et al.
    Paper citation Code Project_Page

Non-LLM-based

  • DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation (9 Jan 2024)

    [CVPR 2024] Junming Chen, Yunfei Liu, Jianan Wang, et al.
    Paper citation Project_Page Code

  • TAVGBench: Benchmarking Text to Audible-Video Generation (22 Apr 2024)

    Yuxin Mao, Xuyang Shen, Jing Zhang, et al.
    Paper citation Code

  • Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners (27 Feb 2024)

    [CVPR 2024] Yazhou Xing, Yingqing He, Zeyue Tian, et al.
    Paper citation Code

📍 Multimodal Editing

Image Editing

🔅 LLM-based

  • UltraEdit: Instruction-based Fine-Grained Image Editing at Scale (7 Jul 2024)

    Haozhe Zhao, Xiaojian Ma, Liang Chen, et al.
    Paper citation Project_Page Code

  • SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models (11 Dec 2023)

    [CVPR 2024] Yuzhou Huang, Liangbin Xie, Xintao Wang, et al.
    Paper citation Project_Page Code

  • Self-correcting LLM-controlled Diffusion Models (27 Nov 2023)

    [CVPR 2024] Tsung-Han Wu, Long Lian, Joseph E. Gonzalez, et al.
    Paper citation

  • Emu Edit: Precise Image Editing via Recognition and Generation Tasks (16 Nov 2023)

    [ArXiv 2023] Shelly Sheynin, Adam Polyak, Uriel Singer, et al.
    Paper citation Project_Page

  • Guiding Instruction-based Image Editing via Multimodal Large Language Models

    [ICLR 2024 (Spotlight)] Tsu-Jui Fu, Wenze Hu, Xianzhi Du, et al.
    Paper citation Project_Page Code

  • CHATEDIT: Towards Multi-turn Interactive Facial Image Editing via Dialogue (20 Mar 2023)

    [EMNLP 2023] Xing Cui, Zekun Li, Peipei Li, et al.
    Paper citation Code

  • HIVE: Harnessing Human Feedback for Instructional Visual Editing (16 Mar 2023)

    Shu Zhang, Xinyi Yang, Yihao Feng, et al.
    Paper citation Project_Page Code

  • Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models (8 Mar 2023)

    Chenfei Wu, Shengming Yin, Weizhen Qi, et al.
    Paper citation Code Demo

  • InstructPix2Pix: Learning to Follow Image Editing Instructions (17 Nov 2022)
    [CVPR 2023 (Highlight)] Tim Brooks, Aleksander Holynski, Alexei A. Efros
    Paper citation Project_Page Code

Non-LLM-based (Clip/T5)

  • DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing (4 Feb 2024)

    [CVPR 2024] Chong Mou, Xintao Wang, Jiechong Song, et al.
    Paper citation Code

  • ZONE: Zero-Shot Instruction-Guided Local Editing (28 Dec 2023)

    Shanglin Li, Bohan Zeng, Yutang Feng, et al.
    Paper citation

  • Watch Your Steps: Local Image and Scene Editing by Text Instructions (17 Aug 2023)

    Ashkan Mirzaei, Tristan Aumentado-Armstrong, Marcus A. Brubaker, et al.
    Paper citation Project_Page

  • DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models (5 Jul 2023)

    [ICLR 2024] Chong Mou, Xintao Wang, Jiechong Song, et al.
    Paper citation Project_Page Code

  • Differential Diffusion: Giving Each Pixel Its Strength (1 Jun 2023)

    [ArXiv 2023] Eran Levin, Ohad Fried
    Paper citation Project_Page Code

  • Visual Instruction Inversion: Image Editing via Visual Prompting (26 Jul 2023)

    [ArXiv 2023] Thao Nguyen, Yuheng Li, Utkarsh Ojha, et al.
    Paper citation Project_Page Code

  • MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing (17 Apr 2023)

    [ICCV 2023] Mingdeng Cao, Xintao Wang, Zhongang Qi, et al.
    Paper citation Project_Page Code

  • PAIR-Diffusion: A Comprehensive Multimodal Object-Level Image Editor (30 Mar 2023)

    [ArXiv 2023] Vidit Goel, Elia Peruzzo, Yifan Jiang, et al.
    Paper citation Project_Page Code

  • Zero-shot Image-to-Image Translation (6 Feb 2023)

    [SIGGRAPH 2023] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, et al.
    Paper citation Project_Page Code

  • SINE: SINgle Image Editing with Text-to-Image Diffusion Models (8 Dec 2022)

    [CVPR 2023] Zhixing Zhang, Ligong Han, Arnab Ghosh, et al.
    Paper citation Project_Page Code

  • Interactive Image Manipulation with Complex Text Instructions (25 Nov 2022)

    [WACV 2023] Ryugo Morita, Zhiqiang Zhang, Man M. Ho, et al.
    Paper citation

  • Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation (22 Nov 2022)

    [CVPR 2023] Narek Tumanyan, Michal Geyer, Shai Bagon, et al.
    Paper citation Project_Page Code

  • Imagic: Text-Based Real Image Editing with Diffusion Models (17 Oct 2022)

    [CVPR 2023] Bahjat Kawar, Shiran Zada, Oran Lang, et al.
    Paper citation Project_Page

  • Null-text Inversion for Editing Real Images using Guided Diffusion Models

    [ICLR 2023] Ron Mokady, Amir Hertz, Kfir Aberman, et al.
    Paper citation Project_Page Code

  • Prompt-to-Prompt Image Editing with Cross Attention Control

    [ICLR 2023] Amir Hertz, Ron Mokady, Jay Tenenbaum, et al.
    Paper citation Project_Page Code

  • DiffEdit: Diffusion-based semantic image editing with mask guidance (20 Oct 2022)

    [ICLR 2023] Guillaume Couairon, Jakob Verbeek, Holger Schwenk, et al.
    Paper citation

Video Editing

🔅 LLM-based

  • Consistent Video-to-Video Transfer Using Synthetic Dataset (1 Nov 2023)
    Jiaxin Cheng, Tianjun Xiao, Tong He.
    Paper citation Code

  • InstructVid2Vid: Controllable Video Editing with Natural Language Instructions (21 May 2023)

    Bosheng Qin, Juncheng Li, Siliang Tang, et al.
    Paper citation

Non-LLM-based (Clip/T5)

  • AudioScenic: Audio-Driven Video Scene Editing (25 Apr 2024)

    Kaixin Shen, Ruijie Quan, Linchao Zhu, et al.
    Paper citation

  • LatentWarp: Consistent Diffusion Latents for Zero-Shot Video-to-Video Translation (1 Nov 2023)

    Yuxiang Bao, Di Qiu, Guoliang Kang, et al.
    Paper citation

  • MagicStick: Controllable Video Editing via Control Handle Transformations (1 Nov 2023)

    Yue Ma, Xiaodong Cun, Yingqing He, et al.
    Paper citation Project_Page Code

  • MagicEdit: High-Fidelity Temporally Coherent Video Editing (28 Aug 2023)

    Jun Hao Liew, Hanshu Yan, Jianfeng Zhang, Zhongcong Xu, Jiashi Feng.
    Paper citation Project_Page Code

  • StableVideo: Text-driven Consistency-aware Diffusion Video Editing (18 Aug 2023)

    [ICCV 2023] Wenhao Chai, Xun Guo, Gaoang Wang, Yan Lu.
    Paper citation Code

  • CoDeF: Content Deformation Fields for Temporally Consistent Video Processing (15 Aug 2023)

    Hao Ouyang, Qiuyu Wang, Yuxi Xiao, Qingyan Bai, Juntao Zhang, Kecheng Zheng, Xiaowei Zhou, Qifeng Chen.
    Paper citation Project_Page Code

  • TokenFlow: Consistent Diffusion Features for Consistent Video Editing (19 Jul 2023)

    Michal Geyer, Omer Bar-Tal, Shai Bagon, Tali Dekel.
    Paper citation Project_Page Code

  • Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation (13 Jun 2023)

    Shuai Yang, Yifan Zhou, Ziwei Liu, Chen Change Loy.
    Paper citation Project_Page Code

  • ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing (26 May 2023)

    Min Zhao, Rongzhen Wang, Fan Bao, Chongxuan Li, Jun Zhu.
    Paper citation Project_Page Code

  • Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts (15 May 2023)
    Yuyang Zhao, Enze Xie, Lanqing Hong, Zhenguo Li, Gim Hee Lee.
    Paper citation Project_Page Code

  • Pix2Video: Video Editing using Image Diffusion (22 Mar 2023)
    [ICCV 2023] Duygu Ceylan, Chun-Hao P. Huang, Niloy J. Mitra.
    Paper citation Project_Page Code

  • FateZero: Fusing Attentions for Zero-shot Text-based Video Editing (16 Mar 2023)

    [ICCV 2023] Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, Qifeng Chen.
    Paper citation Project_Page Code

  • Video-P2P: Video Editing with Cross-attention Control (8 Mar 2023)

    Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, Jiaya Jia.
    Paper citation Project_Page Code

  • Dreamix: Video Diffusion Models are General Video Editors (2 Feb 2023)

    Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, Yedid Hoshen.
    Paper citation Project_Page

  • Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation (22 Dec 2022)

    [ICCV 2023] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, Mike Zheng Shou.
    Paper citation Project_Page Code
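
A recurring mechanism in the one-shot video editors above (Tune-A-Video and many of its successors) is "inflating" an image model's per-frame spatial self-attention into sparse spatio-temporal attention, where each frame additionally attends to an anchor frame. A toy numpy sketch of that idea, with random tensors standing in for real latents and the anchor choice (frame 0) used only for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, k, v):
    """Single-head scaled dot-product attention."""
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

rng = np.random.default_rng(0)
frames, n_pix, d = 4, 9, 8
x = rng.normal(size=(frames, n_pix, d))   # per-frame latent tokens

# Per-frame spatial self-attention: the image model applied frame by frame,
# with no information flowing across time.
spatial = np.stack([attend(f, f, f) for f in x])

# "Inflated" sparse attention: frame t's queries attend to the tokens of
# frame t AND an anchor frame (frame 0), tying the frames together.
inflated = []
for t in range(frames):
    kv = np.concatenate([x[0], x[t]], axis=0)
    inflated.append(attend(x[t], kv, kv))
inflated = np.stack(inflated)
```

The appeal of this design is that the inflated layers reuse the pretrained image weights unchanged; only the key/value set grows, which is why a single example video suffices for tuning.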

3D Editing

🔅 LLM-based

  • SceneCraft: An LLM Agent for Synthesizing 3D Scene as Blender Code (2 Mar 2024)

    Ziniu Hu, Ahmet Iscen, Aashi Jain, Thomas Kipf, Yisong Yue, David A. Ross, Cordelia Schmid, Alireza Fathi.
    Paper

  • 3D-GPT: Procedural 3D Modeling with Large Language Models (19 Oct 2023)

    Chunyi Sun*, Junlin Han*, Weijian Deng, Xinlong Wang, Zishan Qin, Stephen Gould.
    Paper citation Code

Non-LLM-based (Clip/T5)

  • Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models (16 Nov 2023)

    Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, Gang Yu.
    Paper citation Code

  • 3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score Distillation (16 Nov 2023)

    Dale Decatur, Itai Lang, Kfir Aberman, Rana Hanocka.
    Paper citation Code

  • Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields (23 Aug 2023)

    Hyeonseop Song, Seokhun Choi, Hoseok Do, Chul Lee, Taehyeong Kim.
    Paper citation

  • SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field (23 Mar 2023)

    [CVPR 2023] Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, Zhaopeng Cui.
    Paper citation Code

  • TextDeformer: Geometry Manipulation using Text Guidance (26 Apr 2023)

    [SIGGRAPH 2023] William Gao, Noam Aigerman, Thibault Groueix, Vladimir G. Kim, Rana Hanocka.
    Paper citation Code

  • Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (22 Mar 2023)

    [ICCV 2023] Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa.
    Paper citation Code

  • DreamEditor: Text-Driven 3D Scene Editing with Neural Fields (23 Jun 2023)

    [SIGGRAPH Asia 2023] Jingyu Zhuang, Chen Wang, Lingjie Liu, Liang Lin, Guanbin Li.
    Paper citation Code

  • SKED: Sketch-guided Text-based 3D Editing (19 Mar 2023)

    [ICCV 2023] Aryan Mikaeili, Or Perel, Mehdi Safaee, Daniel Cohen-Or, Ali Mahdavi-Amiri.
    Paper citation Code

  • Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields (22 Jun 2023)

    [ICCVW 2023] Ori Gordon, Omri Avrahami, Dani Lischinski.
    Paper citation

  • ClipFace: Text-guided Editing of Textured 3D Morphable Models (2 Dec 2022)

    [SIGGRAPH 2023] Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner.
    Paper citation Code

Audio Editing

🔅 LLM-based

  • Loop Copilot: Conducting AI Ensembles for Music Generation and Iterative Editing (19 Oct 2023)

    Yixiao Zhang, Akira Maezawa, Gus Xia, Kazuhiko Yamamoto, Simon Dixon.
    Paper citation Project_Page Code

  • UniAudio: An Audio Foundation Model Toward Universal Audio Generation (1 Oct 2023)
    Dongchao Yang, Jinchuan Tian, Xu Tan
    Paper citation Project_Page Code

Non-LLM-based (Clip/T5)

📍 Multimodal Agents

  • LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing (1 Nov 2023)

    Wei-Ge Chen, Irina Spiridonova, Jianwei Yang, Jianfeng Gao, Chunyuan Li.
    Paper citation Project_Page Code Demo
    Tags: Image Chat, Image Segmentation, Image Generation, Image Editing

  • ControlLLM: Augment Language Models with Tools by Searching on Graphs (26 Oct 2023)

    Zhaoyang Liu, Zeqiang Lai, Zhangwei Gao, Erfei Cui, Ziheng Li, Xizhou Zhu, Lewei Lu, Qifeng Chen, Yu Qiao, Jifeng Dai, Wenhai Wang.
    Paper citation Project_Page Code Demo
    Tags: Image Understanding, Image Generation, Image Editing, Video Understanding, Video Generation, Video Editing, Audio Understanding, Audio Generation

  • ImageBind-LLM: Multi-modality Instruction Tuning (7 Sep 2023)

    Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, Xudong Lu, Shuai Ren, Yafei Wen, Xiaoxin Chen, Xiangyu Yue, Hongsheng Li, Yu Qiao.
    Paper citation Code
    Modalities: text, image, video, audio, point cloud

  • ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models (2 Sep 2023)

    Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou.
    Paper citation Code

  • InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language (9 May 2023)

    Zhaoyang Liu, Yinan He, Wenhai Wang, Weiyun Wang, Yi Wang, Shoufa Chen, Qinglong Zhang, Zeqiang Lai, Yang Yang, Qingyun Li, Jiashuo Yu, Kunchang Li, Zhe Chen, Xue Yang, Xizhou Zhu, Yali Wang, Limin Wang, Ping Luo, Jifeng Dai, Yu Qiao.
    Paper citation Code Demo
    Condition Modality: text, image, video, audio

  • HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face (30 Mar 2023)

    Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang.
    Paper citation Code Demo

  • Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models (8 Mar 2023)

    Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan.
    Paper citation Code Demo

  • AutoGPT: build & use AI agents
    Project_Page Code
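
Most of the agent systems above (HuggingGPT, ControlLLM, Visual ChatGPT) share one control flow: an LLM planner maps a user request to a pipeline of expert tools, which are then dispatched and their results collected. A minimal sketch of that loop; the planner rule and the two tools here are hypothetical stubs, not any project's real API:

```python
# Hypothetical stubs standing in for real expert models; only the control
# flow (plan -> dispatch -> collect) mirrors the agent papers above.
TOOLS = {
    "caption_image": lambda path: f"a caption for {path}",
    "generate_image": lambda prompt: f"<image generated from '{prompt}'>",
}

def plan(request):
    """Stand-in for the LLM planner: map a request to (tool, argument) steps.
    Real systems prompt an LLM to emit this pipeline as structured output."""
    if "describe" in request:
        return [("caption_image", request.split()[-1])]
    return [("generate_image", request)]

def run_agent(request):
    """Dispatch each planned step to its tool and collect the results."""
    results = []
    for tool_name, arg in plan(request):
        results.append(TOOLS[tool_name](arg))
    return results

print(run_agent("describe photo.jpg"))   # ['a caption for photo.jpg']
```

The papers differ mainly in how `plan` is realized (free-form prompting, graph search over a tool dependency graph, or fixed routing) and in how intermediate results are fed back to the planner.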

📍 Multimodal Understanding with LLMs

Multiple modalities

  • Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities (9 Nov 2023)
    [CVPR 2024] AJ Piergiovanni, Isaac Noble, Dahun Kim, Michael S. Ryoo, Victor Gomes, Anelia Angelova.
    Paper citation
    Modalities: text, video, audio

Image Understanding

  • Image Textualization: An Automatic Framework for Creating Accurate and Detailed Image Descriptions (11 Jun 2024)

    Renjie Pi, Jianshu Zhang, Jipeng Zhang, Rui Pan, Zhekai Chen, Tong Zhang.
    Paper citation

  • T2S-GPT: Dynamic Vector Quantization for Autoregressive Sign Language Production from Text (11 Jun 2024)

    [ACL 2024] Aoxiong Yin, Haoyuan Li, Kai Shen, Siliang Tang, Yueting Zhuang.
    Paper citation

  • Open-World Human-Object Interaction Detection via Multi-modal Prompts (11 Jun 2024)

    Jie Yang, Bingliang Li, Ailing Zeng, Lei Zhang, Ruimao Zhang.
    Paper citation

  • Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? (11 Jun 2024)

    Xingyu Fu, Muyu He, Yujie Lu, William Yang Wang, Dan Roth.
    Paper citation

  • InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks (21 Dec 2023)

    Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, Jifeng Dai.
    Paper citation Code Demo

  • LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (28 Nov 2023)
    Yanwei Li, Chengyao Wang, Jiaya Jia
    Paper citation Project_Page Code Demo

  • CogVLM: Visual Expert for Pretrained Language Models (6 Nov 2023)

    Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, Jie Tang.
    Paper citation Code Demo

  • MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning (14 Oct 2023)

    Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, Mohamed Elhoseiny.
    Paper citation Project_Page Code Demo

  • OphGLM: Training an Ophthalmology Large Language-and-Vision Assistant based on Instructions and Dialogue (21 Jun 2023)

    Weihao Gao, Zhuo Deng, Zhiyuan Niu, Fuju Rong, Chucheng Chen, Zheng Gong, Wenze Zhang, Daimin Xiao, Fang Li, Zhenjie Cao, Zhaoyi Ma, Wenbin Wei, Lan Ma.
    Paper citation Project_Page Code

  • InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition (26 Sep 2023)

    Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Haodong Duan, Songyang Zhang, Shuangrui Ding, Wenwei Zhang, Hang Yan, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, Jiaqi Wang.
    Paper citation Code

  • [LaVIT] Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization (9 Sep 2023)

    Yang Jin, Kun Xu, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Quzhe Huang, Bin Chen, Chenyi Lei, An Liu, Chengru Song, Xiaoqiang Lei, Di Zhang, Wenwu Ou, Kun Gai, Yadong Mu.
    Paper citation Code tokenizer

  • Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond (24 Aug 2023)

    Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou.
    Paper citation Project_Page Code Demo

  • VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks (18 May 2023)

    [NeurIPS 2023] Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, Jifeng Dai.
    Paper citation Code Demo

  • InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning (11 May 2023)

    Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
    Paper citation Code

  • MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models (20 Apr 2023)

    Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, Mohamed Elhoseiny.
    Paper citation Project_Page Code Demo

  • Visual Instruction Tuning (17 Apr 2023)

    [NeurIPS 2023 (Oral)] Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee.
    Paper citation Project_Page Code Demo

Video Understanding

  • Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding (22 Sep 2024)

    Yan Shu, Peitian Zhang, Zheng Liu, Minghao Qin, Junjie Zhou, Tiejun Huang, Bo Zhao.
    Paper citation Code

  • Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution (19 Sep 2024)

    Zuyan Liu, Yuhao Dong, Ziwei Liu, Winston Hu, Jiwen Lu, Yongming Rao.
    Paper citation Project_Page Code

  • VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs (25 Apr 2024)

    Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, Lidong Bing.
    Paper citation Code

  • PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning (25 Apr 2024)

    Lin Xu, Yilin Zhao, Daquan Zhou, Zhijie Lin, See Kiong Ng, Jiashi Feng.
    Paper citation Code

  • MovieChat: From Dense Token to Sparse Memory for Long Video Understanding (3 Dec 2023)
    Enxin Song, et al.
    Paper citation Code

  • LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (28 Nov 2023)
    Yanwei Li, et al.
    Paper citation Code

  • Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models (27 Nov 2023)
    Munan Ning, et al.
    Paper citation Code

  • PG-Video-LLaVA: Pixel Grounding Large Video-Language Models (22 Nov 2023)
    Shehan Munasinghe, et al.
    Paper citation Code Project_Page

  • Video-LLaVA: Learning United Visual Representation by Alignment Before Projection (16 Nov 2023)
    Bin Lin, et al.
    Paper citation Code Demo

  • Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding (14 Nov 2023)
    Peng Jin, et al.
    Paper citation Code Demo

  • Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding (5 Jun 2023)
    [EMNLP 2023 Demo] Hang Zhang, Xin Li, Lidong Bing.
    Paper citation Code Demo

  • AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? (31 Jul 2023)
    Qi Zhao, et al.
    Paper citation Project_Page

  • Valley: Video Assistant with Large Language model Enhanced ability (12 Jun 2023)
    Ruipu Luo, et al.
    Paper citation Project_Page Code

  • Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models (8 Jun 2023)
    Muhammad Maaz, Hanoona Rasheed, Salman Khan, et al.
    Paper citation Code

  • VideoChat: Chat-Centric Video Understanding (10 May 2023)
    KunChang Li, et al.
    Paper citation Code

  • VideoLLM: Modeling Video Sequence with Large Language Models (22 May 2023)
    Guo Chen, et al.
    Paper citation Code

  • Learning video embedding space with Natural Language Supervision (25 Mar 2023)
    Phani Krishna Uppala, Shriti Priya, Vaidehi Joshi.
    Paper citation
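
Long-video models in this list, such as MovieChat, keep hour-scale footage tractable by consolidating frame features into a fixed-size memory, typically by repeatedly merging the most similar adjacent frames. A toy numpy sketch of that greedy merge; the feature dimensions and budget are arbitrary, and this is illustrative only, not MovieChat's actual implementation:

```python
import numpy as np

def consolidate(frames, budget):
    """Greedily average the most cosine-similar adjacent pair of frame
    features until at most `budget` features remain."""
    frames = [f / np.linalg.norm(f) for f in frames]
    while len(frames) > budget:
        sims = [float(frames[i] @ frames[i + 1]) for i in range(len(frames) - 1)]
        i = int(np.argmax(sims))                      # most redundant pair
        merged = (frames[i] + frames[i + 1]) / 2      # merge the pair
        frames[i:i + 2] = [merged / np.linalg.norm(merged)]
    return np.stack(frames)

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 16))          # 32 frame features of dim 16
memory = consolidate(list(feats), budget=8)
print(memory.shape)                        # (8, 16)
```

Merging adjacent (rather than arbitrary) pairs preserves temporal order, which matters when the condensed memory is later fed to the language model as a sequence.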

3D Understanding

  • LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning (30 Nov 2023)

    [CVPR 2024] Sijin Chen, Xin Chen, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, Tao Chen.
    Paper citation Code

  • LiDAR-LLM: Exploring the Potential of Large Language Models for 3D LiDAR Understanding (21 Dec 2023)
    Senqiao Yang*, Jiaming Liu*, Ray Zhang, et al.
    Paper citation

  • 3D-LLM: Injecting the 3D World into Large Language Models (24 Jul 2023)

    [NeurIPS 2023 Spotlight] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, Chuang Gan.
    Paper citation Code

  • PointLLM: Empowering Large Language Models to Understand Point Clouds (31 Aug 2023)

    [NeurIPS 2023 Spotlight] Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, Dahua Lin.
    Paper citation Code

  • PointCLIP: Point Cloud Understanding by CLIP (4 Dec 2021)

    [CVPR 2022] Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, Hongsheng Li.
    Paper citation Code
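
PointCLIP's core idea is to make point clouds consumable by a 2D model like CLIP by projecting them into depth maps from several views. A self-contained numpy sketch of one such projection; the resolution, single view, and depth convention are arbitrary choices for illustration, not the paper's exact pipeline:

```python
import numpy as np

def project_depth(points, res=16):
    """Project a point cloud (coordinates in [-1, 1]^3) onto the XY plane
    as a depth map, keeping the nearest (smallest-z) point per pixel."""
    depth = np.full((res, res), np.inf)
    # Map x,y from [-1, 1] to integer pixel indices in [0, res-1].
    xy = np.clip(((points[:, :2] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    for (u, v), z in zip(xy, points[:, 2]):
        depth[v, u] = min(depth[v, u], z)
    depth[np.isinf(depth)] = 0.0       # empty pixels become background
    return depth

rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, size=(256, 3))
dmap = project_depth(cloud)
print(dmap.shape)                      # (16, 16)
```

The resulting depth images can then be fed through a frozen 2D encoder, which is what lets these methods reuse image-text pretraining for 3D recognition without 3D-specific training.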

Audio Understanding

  • Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action (28 Dec 2023)

    Jiasen Lu, Christopher Clark, Sangho Lee, Zichen Zhang, Savya Khosla, Ryan Marten, Derek Hoiem, Aniruddha Kembhavi.
    Paper citation Project_Page Code

  • M2UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models (19 Nov 2023)

    Atin Sakkeer Hussain, Shansong Liu, Chenshuo Sun, Ying Shan.
    Paper citation Project_Page Code Demo

  • Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models (14 Nov 2023)

    Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, Jingren Zhou.
    Paper citation Project_Page

  • SALMONN: Towards Generic Hearing Abilities for Large Language Models (20 Oct 2023)

    Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang.
    Paper citation Project_Page Code Demo

  • MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models (18 Oct 2023)

    Dingyao Yu, Kaitao Song, Peiling Lu, Tianyu He, Xu Tan, Wei Ye, Shikun Zhang, Jiang Bian.
    Paper citation Code

  • LLark: A multimodal foundation model for music (11 Oct 2023)

    Josh Gardner, Simon Durand, Daniel Stoller, Rachel M. Bittner.
    Paper citation Project_Page Code

  • LauraGPT: Listen, Attend, Understand, and Regenerate Audio with GPT (7 Oct 2023)

    Jiaming Wang, Zhihao Du, Qian Chen, Yunfei Chu, Zhifu Gao, Zerui Li, Kai Hu, Xiaohuan Zhou, Jin Xu, Ziyang Ma, Wen Wang, Siqi Zheng, Chang Zhou, Zhijie Yan, Shiliang Zhang.
    Paper citation Project_Page

  • Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Supervision, and LLM Mix-up Augmentation (29 Sep 2023)

    Shih-Lun Wu, Xuankai Chang, Gordon Wichern, Jee-weon Jung, François Germain, Jonathan Le Roux, Shinji Watanabe.
    Paper citation

  • Connecting Speech Encoder and Large Language Model for ASR (25 Sep 2023)

    Wenyi Yu, Changli Tang, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang.
    Paper citation

  • Can Whisper perform speech-based in-context learning? (13 Sep 2023)

    Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang.
    Paper citation

  • Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning (22 Aug 2023)

    Shansong Liu, Atin Sakkeer Hussain, Chenshuo Sun, Ying Shan.
    Paper citation Project_Page Code Demo

  • On decoder-only architecture for speech-to-text and large language model integration (8 Jul 2023)

    Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shujie Liu, Bo Ren, Linquan Liu, Yu Wu.
    Paper [![citation](https://img.shields.i