XingliangJin
Master's student at Beijing Information Science & Technology University, focusing on character animation generation and stylization.
XingliangJin's Stars
instantX-research/InstantID
InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
HugoBlox/hugo-blox-builder
🚨 GROW YOUR AUDIENCE WITH HUGOBLOX! 🚀 HugoBlox is an easy, fast no-code website builder for researchers, entrepreneurs, data scientists, and developers. Build stunning sites in minutes with drag-and-drop editing, customizable templates, and built-in SEO tools.
pengsida/learning_research
My research experience.
FoundationVision/VAR
[NeurIPS 2024 Oral][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!
yuweihao/MambaOut
MambaOut: Do We Really Need Mamba for Vision?
AlonzoLeeeooo/awesome-text-to-image-studies
A collection of awesome text-to-image generation studies.
TianxingChen/Embodied-AI-Guide
A Chinese-language guide to Embodied AI.
Dai-Wenxun/MotionLCM
[ECCV 2024] Official implementation of "MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model"
BarqueroGerman/FlowMDM
[CVPR 2024] Official Implementation of "Seamless Human Motion Composition with Blended Positional Encodings".
AIGAnimation/CAMDM
(SIGGRAPH 2024) Official repository for "Taming Diffusion Probabilistic Models for Character Control"
LinghaoChan/UniMoCap
[Open-source Project] UniMoCap: a community implementation to unify the text-motion datasets (HumanML3D, KIT-ML, and BABEL) and the whole-body motion dataset (Motion-X).
roykapon/MAS
The official implementation of the paper "MAS: Multiview Ancestral Sampling for 3D Motion Generation Using 2D Diffusion"
Yi-Shi94/AMDM
Interactive Character Control with Auto-Regressive Motion Diffusion Models
nv-tlabs/stmc
Implementation of "Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation" from CVPR Workshop on Human Motion Generation 2024.
neu-vi/SMooDi
Kebii/TapMo
OpenMotionLab/MotionChain
MotionChain: Conversational Motion Controllers via Multimodal Prompts
dongzhuoyao/Diffusion-Representation-Learning-Survey-Taxonomy
cure-lab/MotionCraft
Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls"
XingliangJin/MCM-LDM
[CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model
L-Sun/LGTM
LinghaoChan/OpenTMA
OpenTMA: supports text-motion alignment for HumanML3D, Motion-X, and UniMoCap
ou524u/MotionCritic
Silverster98/Awesome-Human-Motion-Generation
A list of awesome human motion generation papers, continuously updated.
raipranav384/CLIP-Head
Official implementation of the paper "CLIP-Head: Text-Guided Generation of Textured Neural Parametric 3D Head Models"
shuochengzhai/Infinite-Motion
KunhangL/finemotiondiffuse
Motion Generation from Fine-grained Textual Descriptions (LREC-COLING 2024)
wangxuanx/Face-Diffusion-Model
The official PyTorch code for "Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion"
ZxyLinkstart/Automatic-Generation-of-3D-Scene-Animation
Code for "Automatic Generation of 3D Scene Animation Based on Dynamic Knowledge Graphs and Contextual Encoding"
AveryJohnsonJJ/DTT