🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺🕺
- Stand still, take one photo from your front side and one from your back side.
- Prepare your favourite dance video, split it into frames, and save them into the `images` folder.

Then you get `motion_video.mp4`, you are dancing~
- input
  - front and back images
  - your favourite dance video
- output
  - `motion_video.mp4`
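Judging from the commands later in this README, the pipeline expects a per-subject data folder with `images` and `keypoints` subfolders (the `smpl` and `motion_snapshots` names below are taken from the later steps; the exact layout is an assumption, adjust to your setup). A small sketch to create it:

```python
from pathlib import Path

# Create the per-subject folder layout assumed by the commands in this
# README: images/ for video frames, keypoints/ for OpenPose JSON,
# smpl/ for smplify-x output, motion_snapshots/ for rendered frames.
# The subject name "obj1" matches the examples below.
def make_subject_dirs(root="data/obj1"):
    root = Path(root)
    for sub in ("images", "keypoints", "smpl", "motion_snapshots"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root

root = make_subject_dirs("data/obj1")
print(sorted(p.name for p in root.iterdir()))
```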
This project is built on these great and useful projects: textured_smplx, romp, smplify-x, humannerf.

Because of the complex dependencies, you can basically refer to textured_smplx and romp for environment setup.
Choose one way to run the code:
- run the pipeline directly
- run it step by step
```shell
python pipeline.py data/obj1 data/obj1/images/P01125-150055.jpg data/obj1/images/P01125-150146.jpg
```
```shell
# Prepare a folder of frames in images/. If you have a video, extract
# frames with ffmpeg, or let romp process the video directly.
romp --mode=video --calc_smpl --render_mesh -i=images/ -o=romp_output/ -t -sc=1.
```

Then you get an `.npz` file with the SMPL parameter sequence.
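The exact contents of the ROMP `.npz` vary by version, so inspect its keys with `np.load(...)` first. As an illustration only (the `poses`/`betas` key names here are assumptions, not ROMP's documented schema), per-frame SMPL parameters can be stacked into sequence arrays like this:

```python
import numpy as np

# Stack per-frame SMPL parameters into sequence arrays. Each frame is
# assumed to be a dict with a 72-value axis-angle pose and 10 shape
# betas -- key names are placeholders; check your actual .npz file.
def stack_smpl_sequence(frames):
    poses = np.stack([np.asarray(f["poses"]) for f in frames])
    betas = np.stack([np.asarray(f["betas"]) for f in frames])
    return poses, betas

# Tiny synthetic example with 5 frames.
frames = [{"poses": np.zeros(72), "betas": np.zeros(10)} for _ in range(5)]
poses, betas = stack_smpl_sequence(frames)
print(poses.shape, betas.shape)  # (5, 72) (5, 10)
```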
An example can be found in `./data/obj1/images`.
```shell
openpose.bin --display 0 --render_pose 1 --image_dir ./data/obj1/images --write_json ./data/obj1/keypoints --write_images ./data/obj1/pose_images --hand --face
```
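OpenPose's `--write_json` output stores keypoints as flat `[x, y, confidence, ...]` lists under a `people` array, one JSON file per image. A minimal reader sketch (shown on a tiny synthetic example rather than a real file):

```python
import json
import numpy as np

# Parse one OpenPose --write_json payload into an (N, 3) array of
# (x, y, confidence) body keypoints for the first detected person.
def load_body_keypoints(json_text):
    data = json.loads(json_text)
    if not data["people"]:
        return None  # no person detected in this frame
    flat = data["people"][0]["pose_keypoints_2d"]
    return np.asarray(flat, dtype=float).reshape(-1, 3)

# Synthetic payload with two keypoints.
sample = json.dumps({"people": [{"pose_keypoints_2d": [10, 20, 0.9, 30, 40, 0.8]}]})
kps = load_body_keypoints(sample)
print(kps.shape)  # (2, 3)
```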
Please follow the instructions here.
```shell
python smplifyx/main.py --config cfg_files/fit_smpl.yaml --data_folder ../data/obj1 --output_folder ../data/obj1/smpl --model_folder models --vposer_ckpt V02_05
```

`data_folder` (here `../data/obj1`) should contain the `images` folder and the `keypoints` folder, and the output contains the fitted `obj` and `pkl` files (SMPL-related parameters).
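The `pkl` output holds the fitted SMPL parameters as a pickled dict. A loading sketch on synthetic data (the key names below are illustrative placeholders; print `result.keys()` on your actual file to see what smplify-x wrote):

```python
import pickle
import numpy as np

# Round-trip a dict of SMPL-style parameters through pickle. The real
# smplify-x output has its own key set, so treat these names as
# assumptions and inspect loaded.keys() on your actual .pkl file.
params = {
    "betas": np.zeros((1, 10)),         # shape coefficients
    "body_pose": np.zeros((1, 63)),     # 21 body joints x 3 axis-angle
    "global_orient": np.zeros((1, 3)),  # root orientation
}
blob = pickle.dumps(params)
loaded = pickle.loads(blob)
print(sorted(loaded.keys()))
```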
Run:

```shell
python demo.py data_path front_img back_img smplx
```
Run `python prepare_smpl_sequences` to get images of the novel poses, and save the images into the `motion_snapshots` folder.
Run ffmpeg, for example:

```shell
ffmpeg -f image2 -i motion_snapshots/%06d.png motion_video.mp4
```
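ffmpeg's `%06d.png` pattern expects consecutively numbered frames starting from a small index. If your snapshot filenames are arbitrary, a renumbering sketch (pure Python, shown on a throwaway demo folder; it assumes the originals do not already collide with the target names):

```python
from pathlib import Path

# Rename snapshot images, in sorted order, to the 000000.png,
# 000001.png, ... sequence that ffmpeg's %06d.png pattern expects.
def renumber_frames(folder):
    folder = Path(folder)
    frames = sorted(folder.glob("*.png"))
    for i, src in enumerate(frames):
        src.rename(folder / f"{i:06d}.png")
    return len(frames)

# Demo on a temporary folder with unordered names.
folder = Path("motion_snapshots_demo")
folder.mkdir(exist_ok=True)
for name in ("b.png", "a.png", "c.png"):
    (folder / name).write_bytes(b"")
n = renumber_frames(folder)
print(n, sorted(p.name for p in folder.iterdir()))
```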