This code provides an initial implementation of the dance generation model from the TIP 2021 paper "DanceIt: Music-inspired Dancing Video Synthesis". The project is still under construction.
Install the dependencies:
pip install -r requirement.txt
To use the preprocessed data:
- Download the dataset and unzip it in your customized path.
- Build train_data.json and truth_data.json by running DataProcess.py (see the invocation after this list).
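Assuming DataProcess.py takes no required arguments and reads the unzipped data from a path configured inside the script (its command-line interface is not documented here), this step amounts to:
python DataProcess.py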
To build the training data from raw videos instead:
- Download the dataset and unzip it in your customized path.
- Use OpenPose to extract pose keypoints from the dance videos.
- Run preprocess/audio.py to extract the audio of the videos.
- Run preprocess/audiomfcc.py to extract audio features (a combined sketch of these two audio steps follows this list).
- Run preprocess/CombineAudioDance.py to build train_data.json and truth_data.json.
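For orientation, the two audio steps could look roughly like the sketch below, assuming preprocess/audio.py wraps ffmpeg and preprocess/audiomfcc.py computes librosa-style MFCCs. The file names, sample rate, and MFCC parameters are illustrative assumptions, not the repo's actual settings.

# Illustrative sketch only: the repo's scripts may use different tools and
# parameters. "example_video.mp4" and "example.wav" are placeholder paths.
import subprocess

import librosa
import numpy as np

# Step 1 (cf. preprocess/audio.py): extract the audio track with ffmpeg.
# "-vn" drops the video stream; 16 kHz mono PCM is a common choice for
# downstream MFCC extraction.
subprocess.run(
    ["ffmpeg", "-i", "example_video.mp4",
     "-vn", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1",
     "example.wav"],
    check=True,
)

# Step 2 (cf. preprocess/audiomfcc.py): compute MFCC audio features.
y, sr = librosa.load("example.wav", sr=None)      # keep the original sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
np.save("example_mfcc.npy", mfcc.T)               # one row per audio frame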
Run main.py for training.
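Assuming the default hyperparameters defined in the script, training is launched as:
python main.py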
To test, download the preprocessed data and run:
python main.py --audio_file audio_path --test_model checkpoint/best_model_db.pth
The project is still being optimized; this initial version serves to verify the validity of our method.
Please cite our paper if you find this work useful:
@article{guo2021danceit,
  title={DanceIt: Music-inspired Dancing Video Synthesis},
  author={Guo, Xin and Zhao, Yifan and Li, Jia},
  journal={IEEE Transactions on Image Processing},
  year={2021},
  publisher={IEEE}
}
The code of the paper is freely available for non-commercial purposes. Permission is granted to use the code provided that you agree:
- That the code comes "AS IS", without express or implied warranty. The authors of the code do not accept any responsibility for errors or omissions.
- That you include the necessary references to the paper [1] in any work that makes use of the code.
- That you may not use the code or any derivative work for commercial purposes, for example, licensing or selling the code or using it to procure a commercial gain.
- That you do not distribute this code or modified versions of it.
- That all rights not expressly granted to you are reserved by the authors of the code.