
Dancing to Music

PyTorch implementation for music-to-dance generation. Supports Python 2.7 and 3.6.

License

Copyright (C) 2020 NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License (1-Way Commercial). To view a copy of this license, visit https://nvlabs.github.io/Dancing2Music/LICENSE.txt

Paper

Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, and Jan Kautz
Dancing to Music
In Neural Information Processing Systems (NeurIPS) 2019

Example Videos

For videos with audio, please visit our YouTube video.

  • Generated Dance Sequences (second row: music beats; third row: kinematic beats)

  • Multimodality Generation (given the same music and the same initial poses)

  • Long Sequence

  • Photorealistic Video

Train Decomposition Stage

  • Run the script
    python train_decomp.py --name Decomp
    

Train Composition Stage

  • Run the script
    python train_comp.py --name Comp --decomp_snapshot DECOMP_SNAPSHOT
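
  To run both stages end to end, a minimal Python 3 sketch using subprocess is below. The snapshot path handed from the first stage to the second is a hypothetical placeholder for illustration; match it to the checkpoint your decomposition run actually writes.

    # Minimal sketch: chain the two training stages with subprocess.
    # The snapshot path is a hypothetical placeholder, not a guaranteed
    # output location of train_decomp.py.
    import subprocess

    # Stage 1: train the decomposition model.
    subprocess.run(["python", "train_decomp.py", "--name", "Decomp"], check=True)

    # Stage 2: train the composition model from a decomposition snapshot.
    decomp_snapshot = "snapshot/Stage1.ckpt"  # hypothetical path; adjust to your run
    subprocess.run(
        ["python", "train_comp.py",
         "--name", "Comp",
         "--decomp_snapshot", decomp_snapshot],
        check=True,
    )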
    

Demo

  • Run the script

    python demo.py --decomp_snapshot DECOMP_SNAPSHOT --comp_snapshot COMP_SNAPSHOT \
      --aud_path AUD_PATH --out_file OUT_FILE --out_dir OUT_DIR --thr THR
    
  • Flags

    • aud_path: input .wav file
    • out_file: location of output .mp4 file
    • out_dir: directory for output frames
    • thr: threshold based on motion magnitude
    • modulate: whether to do beat warping
  • For example

    python demo.py --decomp_snapshot snapshot/Stage1.ckpt --comp_snapshot snapshot/Stage2.ckpt \
      --aud_path demo/demo.wav --out_file demo/out.mp4 --out_dir demo/out_frame
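
  To generate dances for several audio clips in one go, a minimal Python 3 sketch that loops demo.py over a folder of .wav files is below. It uses only the flags documented above; the input and output directory names are hypothetical.

    # Minimal sketch: batch-run demo.py over a directory of .wav files.
    # Directory names here are hypothetical placeholders.
    import pathlib
    import subprocess

    aud_dir = pathlib.Path("demo")      # folder containing input .wav clips
    out_root = pathlib.Path("results")  # root folder for generated outputs

    for wav in sorted(aud_dir.glob("*.wav")):
        frame_dir = out_root / wav.stem
        frame_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["python", "demo.py",
             "--decomp_snapshot", "snapshot/Stage1.ckpt",
             "--comp_snapshot", "snapshot/Stage2.ckpt",
             "--aud_path", str(wav),
             "--out_file", str(out_root / (wav.stem + ".mp4")),
             "--out_dir", str(frame_dir)],
            check=True,
        )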
    

Citation

If you find this code useful for your research, please cite our paper:

@inproceedings{lee2019dancing2music,
  title={Dancing to Music},
  author={Lee, Hsin-Ying and Yang, Xiaodong and Liu, Ming-Yu and Wang, Ting-Chun and Lu, Yu-Ding and Yang, Ming-Hsuan and Kautz, Jan},
  booktitle={NeurIPS},
  year={2019}
}