text-to-motion
There are 21 repositories under the text-to-motion topic.
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
OpenMotionLab/MotionGPT
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
EricGuo5513/momask-codes
Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions (CVPR2024)"
showlab/MotionDirector
[ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
ChenFengYe/motion-latent-diffusion
[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
see2023/Bert-VITS2-ext
Facial expression and animation testing based on Bert-VITS2.
fyyakaxyy/AnimationGPT
AnimationGPT: An AIGC (AI-generated content) tool for generating game combat motion assets
qiqiApink/MotionGPT
The official PyTorch implementation of the paper "MotionGPT: Finetuned LLMs are General-Purpose Motion Generators"
LinghaoChan/UniMoCap
[Open-source Project] UniMoCap: community implementation to unify the text-motion datasets (HumanML3D, KIT-ML, and BABEL) and whole-body motion dataset (Motion-X).
ALEEEHU/Awesome-Text2X-Resources
An open collection of state-of-the-art (SOTA) and novel Text-to-X (X can be anything) methods, including papers, code, and datasets.
steve-zeyu-zhang/MotionMamba
🔥 [ECCV 2024] Motion Mamba: Efficient and Long Sequence Motion Generation
EricGuo5513/TM2T
Official implementation of "TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV2022)"
sato-team/Stable-Text-to-Motion-Framework
SATO: Stable Text-to-Motion Framework
exitudio/MMM
Official repository for "MMM: Generative Masked Motion Model" (CVPR 2024 -- Highlight)
qrzou/ParCo
[ECCV 2024] Official PyTorch implementation of the paper "ParCo: Part-Coordinating Text-to-Motion Synthesis": http://arxiv.org/abs/2403.18512
exitudio/BAMM
Official repository for "BAMM: Bidirectional Autoregressive Motion Model (ECCV 2024)"
steve-zeyu-zhang/MotionAvatar
[BMVC 2024] Motion Avatar: Generate Human and Animal Avatars with Arbitrary Motion
zhshj0110/Awesome-Motion-Diffusion-Models
A collection of resources and papers on Motion Diffusion Models.
steve-zeyu-zhang/KMM
KMM: Key Frame Mask Mamba for Extended Motion Generation
steve-zeyu-zhang/InfiniMotion
InfiniMotion: Mamba Boosts Memory in Transformer for Arbitrary Long Motion Generation
sony/MoLA
PyTorch implementation of MoLA