# Dancing to Music

PyTorch implementation for music-to-dance generation.

Copyright (C) 2020 NVIDIA Corporation. All rights reserved. This work is made available under the Nvidia Source Code License (1-Way Commercial). To view a copy of this license, visit https://nvlabs.github.io/Dancing2Music/LICENSE.txt.

**Dancing to Music**
Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, and Jan Kautz
In Neural Information Processing Systems (NeurIPS) 2019
## Example Videos

For videos with audio, please visit our YouTube video.

- Generated Dance Sequences (second row: music beats; third row: kinematic beats)
- Multimodal Generation given the same music and the same initial poses
- Long Sequence
- Photo-Realistic Video
## Train Decomposition

- Run the script
  `python train_decomp.py --name Decomp`
## Train Composition

- Run the script
  `python train_comp.py --name Comp --decomp_snapshot DECOMP_SNAPSHOT`
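The composition stage builds on top of the pretrained decomposition networks, which is why `--decomp_snapshot` must point to a stage-1 checkpoint. As a rough sketch of what that hand-off can look like in PyTorch (the checkpoint layout, function name, and freezing behavior here are assumptions, not this repo's actual code):

```python
import torch
import torch.nn as nn

def load_decomp_snapshot(model: nn.Module, path: str) -> nn.Module:
    """Hypothetical sketch: restore stage-1 (decomposition) weights
    before training stage 2. Adapt the key lookup to however
    train_decomp.py actually saves its snapshots."""
    ckpt = torch.load(path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # plain state_dict vs. wrapping dict: an assumption
    model.load_state_dict(state)
    model.eval()                      # decomposition nets usually act as fixed encoders/decoders
    for p in model.parameters():
        p.requires_grad_(False)       # and stay frozen during composition training
    return model
```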
## Demo

- Run the script
  `python demo.py --decomp_snapshot DECOMP_SNAPSHOT --comp_snapshot COMP_SNAPSHOT --aud_path AUD_PATH --out_file OUT_FILE --out_dir OUT_DIR --thr THR`
- Flags
  - `aud_path`: input .wav file
  - `out_file`: location of the output .mp4 file
  - `out_dir`: directory for output frames
  - `thr`: threshold based on motion magnitude (see the first sketch after this section)
  - `modulate`: whether to do beat warping (see the second sketch after this section)
- For example
  `python demo.py --decomp_snapshot snapshot/Stage1.ckpt --comp_snapshot snapshot/Stage2.ckpt --aud_path demo/demo.wav --out_file demo/out.mp4 --out_dir demo/out_frame`
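`demo.py` exposes `thr` as a threshold on motion magnitude. The exact statistic the script uses is not documented here, so the sketch below only illustrates one plausible reading: the mean frame-to-frame joint displacement of a generated 2D pose sequence, compared against the threshold. The function name and pose layout are assumptions.

```python
import numpy as np

def motion_magnitude(poses: np.ndarray) -> float:
    """Hypothetical statistic behind --thr.

    poses: (T, J, 2) array of 2D joint positions over T frames.
    Returns the mean per-frame joint displacement; a low value
    indicates a nearly static sequence.
    """
    disp = np.diff(poses, axis=0)               # (T-1, J, 2) frame-to-frame deltas
    return float(np.linalg.norm(disp, axis=-1).mean())
```

A caller could then regenerate until `motion_magnitude(seq) >= thr` to avoid near-static outputs.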
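`modulate` toggles beat warping, i.e. nudging the kinematic beats of the generated motion onto the detected music beats. The sketch below is a minimal illustration of that idea under assumptions of my own (nearest-beat matching and linear time warping); the repo's actual implementation may differ.

```python
import numpy as np

def warp_to_beats(poses: np.ndarray, kin_beats, mus_beats) -> np.ndarray:
    """Minimal beat-warping sketch (illustrative, not the repo's code).

    poses:     (T, J, 2) joint positions over T frames
    kin_beats: sorted interior frame indices of kinematic beats
    mus_beats: sorted frame indices of detected music beats
    """
    T = poses.shape[0]
    # Match each kinematic beat to its nearest music beat.
    targets = [min(mus_beats, key=lambda m: abs(m - k)) for k in kin_beats]
    # Piecewise-linear warp anchored at the sequence boundaries;
    # np.maximum.accumulate keeps the warp monotone if matches cross.
    src = np.array([0.0] + [float(k) for k in kin_beats] + [T - 1.0])
    dst = np.maximum.accumulate(
        np.array([0.0] + [float(t) for t in targets] + [T - 1.0]))
    # For each output frame, find its (fractional) source frame, then
    # linearly interpolate between the two neighboring poses.
    grid = np.interp(np.arange(T, dtype=float), dst, src)
    lo = np.floor(grid).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (grid - lo)[:, None, None]
    return (1.0 - w) * poses[lo] + w * poses[hi]
```

With `--modulate` unset, the generated sequence would presumably be rendered as-is.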
## Citation

If you find this code useful for your research, please cite our paper:
@inproceedings{lee2019dancing2music,
title={Dancing to Music},
author={Lee, Hsin-Ying and Yang, Xiaodong and Liu, Ming-Yu and Wang, Ting-Chun and Lu, Yu-Ding and Yang, Ming-Hsuan and Kautz, Jan},
booktitle={NeurIPS},
year={2019}
}