Unimotion

PyTorch implementation of Unimotion: Unifying 3D Human Motion Synthesis and Understanding.

Unimotion: Unifying 3D Human Motion Synthesis and Understanding

Chuqiao Li, Julian Chibane, Yannan He, Naama Pearl, Andreas Geiger, Gerard Pons-Moll
[Project Page] [Paper]

arXiv, 2024

News 🚩

  • [2024/09/30] Unimotion paper is available on ArXiv.
  • [2024/09/30] Code and pre-trained weights will be released soon.

Key Insight

  • Alignment between frame-level text and motion enables temporal semantic awareness in motion generation!
  • Separate diffusion processes for aligned motion and text enable multi-directional inference!
  • Our model allows Multiple Novel Applications:
    • Hierarchical Control: Allowing users to specify motion at different levels of detail
    • Motion Text Generation: Obtaining motion text descriptions for existing MoCap data or YouTube videos
    • Motion Editing: Allowing for editability, generating motion from text, and editing the motion via text edits
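The multi-directional inference idea above can be sketched in plain Python: each frame carries its own text label alongside its pose, so whichever fields are missing determine the direction of generation. This is a conceptual sketch only; `Frame` and `inference_mode` are hypothetical names, not part of the (not yet released) Unimotion API.

```python
# Conceptual sketch of frame-level text/motion alignment.
# NOTE: hypothetical names for illustration, not the Unimotion codebase.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    pose: Optional[list]  # e.g. per-frame pose parameters; None if to be generated
    text: Optional[str]   # frame-level description; None if to be generated

def inference_mode(frames):
    """Infer the generation direction from which fields are missing."""
    has_pose = all(f.pose is not None for f in frames)
    has_text = all(f.text is not None for f in frames)
    if has_text and not has_pose:
        return "text-to-motion"
    if has_pose and not has_text:
        return "motion-to-text"
    return "joint-generation"

clip = [Frame(pose=None, text="raise left arm"),
        Frame(pose=None, text="lower left arm")]
print(inference_mode(clip))  # text-to-motion
```

Because text is aligned per frame rather than per sequence, the same representation also supports hierarchical control (coarse sequence-level prompts down to fine frame-level ones) and text-driven editing of an existing clip.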

Citation

When using the code, figures, data, etc., please cite our work:

@article{li2024unimotion,
  author    = {Li, Chuqiao and Chibane, Julian and He, Yannan and Pearl, Naama and Geiger, Andreas and Pons-Moll, Gerard},
  title     = {Unimotion: Unifying 3D Human Motion Synthesis and Understanding},
  journal   = {arXiv preprint arXiv:2409.15904},
  year      = {2024},
}