hywang2002
Bachelor's degree from Harbin Institute of Technology; PhD student at Peking University.
hywang2002's Stars
zju3dv/Motion-2-to-3
Code for "Motion-2-to-3: Leveraging 2D Motion Data to Boost 3D Motion Generation", arXiv 2024
EricGuo5513/momask-codes
Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions (CVPR2024)"
GuyTevet/motion-diffusion-model
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
Mael-zys/T2M-GPT
(CVPR 2023) PyTorch implementation of "T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations"
ChenFengYe/motion-latent-diffusion
[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
mingyuan-zhang/MotionDiffuse
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
IDEA-Research/Motion-X
[NeurIPS 2023] Official implementation of the paper "Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset"
genforce/PedGen
Dataset and Code for Paper "Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels"
Genesis-Embodied-AI/Genesis
A generative world for general-purpose robotics & embodied AI learning.
bytedance/MoMA
MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation
insait-institute/InTraGen
Official code for paper InTraGen: Trajectory-controlled Video Generation for Object Interactions
KwaiVGI/3DTrajMaster
[arXiv'24] 3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation
jyuntins/harmony4d
[NeurIPS 2024] Multi-Human Dataset for Close Interactions.
Tencent/HunyuanVideo
HunyuanVideo: A Systematic Framework for Large Video Generation Models
WHU-USI3DV/VistaDream
[arXiv'24] VistaDream: Sampling multiview consistent images for single-view scene reconstruction
weihaox/awesome-digital-human
Digital Human Resource Collection: 2D/3D/4D human modeling, avatar generation & animation, clothed people digitalization, virtual try-on, and others.
minar09/awesome-virtual-try-on
A curated list of awesome research papers, projects, code, dataset, workshops etc. related to virtual try-on.
Zheng-Chong/Awesome-Try-On-Models
A repository for organizing papers, codes and other resources related to Virtual Try-on Models
KwaiVGI/SynCamMaster
[arXiv'24] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints
alibaba/Tora
The official repository for paper "Tora: Trajectory-oriented Diffusion Transformer for Video Generation"
mira-space/MiraData
Official repo for paper "MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions"
jy0205/Pyramid-Flow
Code of Pyramidal Flow Matching for Efficient Video Generative Modeling
microsoft/TRELLIS
Official repo for paper "Structured 3D Latents for Scalable and Versatile 3D Generation".
weiqi-zhang/DiffGS
[NeurIPS 2024] DiffGS: Functional Gaussian Splatting Diffusion
baaivision/See3D
You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale
G-U-N/Phased-Consistency-Model
[NeurIPS 2024] Boosting the performance of consistency models with PCM!
magic-research/piecewise-rectified-flow
PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator (NeurIPS 2024)
ZcsrenlongZ/Deblur4DGS
[arXiv 2024] Deblur4DGS: 4D Gaussian Splatting from Blurry Monocular Video
Shiriluz/Word-As-Image
discus0434/aesthetic-predictor-v2-5
SigLIP-based Aesthetic Score Predictor