Created by Liangjian Chen.
I analyzed the videos on Windows 10 and trained the model on Ubuntu 16.04.
Parts of the project are adapted from:
nyoki-mtl pytorch-EverybodyDanceNow
Lotayou everybody_dance_now_pytorch
-
The source video can be downloaded from 1.video
-
The target video can be downloaded from 2.video
-
Download OpenPose release from here
Place 1.mp4 and 2.mp4 in the OpenPose release folder and run
./build/examples/openpose/openpose.bin --video 1.mp4 --write_json anno_1/ --display 0 --render_pose 0 --face --hand
and
./build/examples/openpose/openpose.bin --video 2.mp4 --write_json anno_2/ --display 0 --render_pose 0 --face --hand
to extract the pose annotations from both videos.
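The per-frame JSON files that `--write_json` produces can be read back as plain Python structures. A minimal sketch (the helper name and the first-person assumption are mine; the field name follows the OpenPose output format):

```python
import json

def load_body_keypoints(json_path):
    """Parse one per-frame OpenPose JSON file (written by --write_json) and
    return the body keypoints of the first detected person as
    (x, y, confidence) triples; empty list if no person was detected."""
    with open(json_path) as f:
        frame = json.load(f)
    people = frame.get("people", [])
    if not people:
        return []
    # OpenPose stores keypoints flattened as [x0, y0, c0, x1, y1, c1, ...]
    flat = people[0]["pose_keypoints_2d"]
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
```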
-
Download vgg19-dcbb9e9d.pth here and put it in
./src/pix2pixHD/models/
-
Download the pre-trained vgg_16 model for face enhancement here and put it in
./face_enhancer/
This step was done on Ubuntu 16.04.
-
Put 2.mp4 and anno_2 in ./data/1 and rename them to video.mp4 and anno
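The copy-and-rename step above can be sketched as follows (the helper name is mine; the paths follow the step):

```python
import shutil
from pathlib import Path

def arrange_data(video_src, anno_src, data_dir="./data/1"):
    """Copy a video and its OpenPose annotation folder into the layout the
    training scripts expect: <data_dir>/video.mp4 and <data_dir>/anno/."""
    root = Path(data_dir)
    root.mkdir(parents=True, exist_ok=True)
    shutil.copy(video_src, root / "video.mp4")
    shutil.copytree(anno_src, root / "anno")

# e.g. arrange_data("2.mp4", "anno_2") before running target.py
```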
-
Run
python target.py --name 1
-
Put 1.mp4 and anno_1 in ./data/1 and rename them to video.mp4 and anno
-
Run
python source.py --name 1 --which_train 2
-
source.py
rescales the labels and saves them in ./data/2/test/
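The label rescaling can be illustrated with a minimal nearest-neighbour resize (a sketch of the idea, not the repository's actual implementation); nearest neighbour is used so interpolation cannot invent invalid class ids:

```python
import numpy as np

def rescale_label(label, out_h, out_w):
    """Nearest-neighbour resize for a (H, W) label map of integer class ids."""
    h, w = label.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return label[rows[:, None], cols[None, :]]
```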
-
Run
python train_pose2vid_temporal.py
and check the loss and full training progress in ./checkpoints/
-
If training is interrupted and you want to resume it, set
load_pretrain = './checkpoints/target/'
in ./src/config/train_opt.py
-
Run
python transfer.py
and get the results in ./result
- Run
python ./Face_GAN/prepare_Data.py
and check the results in ./Face_GAN/data/
- Run
python ./Face_GAN/train_face_gan.py
to train the face enhancer, then run ./Face_GAN/Inference.py
to get the results
- Run
python transfer_temporal.py
and assemble the result pictures into a video
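Assembling numbered result frames into a video can be done with ffmpeg; a minimal sketch (the helper name, frame-name pattern, and frame rate are my assumptions):

```python
import subprocess

def ffmpeg_cmd(frame_dir, out_path, fps=25):
    """Build an ffmpeg command that encodes numbered frames
    (000001.png, 000002.png, ...) in frame_dir into an H.264 video."""
    return ["ffmpeg", "-y", "-framerate", str(fps),
            "-i", f"{frame_dir}/%06d.png",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]

# e.g. subprocess.run(ffmpeg_cmd("./result", "result.mp4"), check=True)
```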
- Pose estimation
  - Pose
  - Face
  - Hand
- pix2pixHD
- FaceGAN
- Temporal smoothing
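The idea behind the temporal smoothing can be illustrated with an exponential moving average over per-frame keypoint coordinates (an illustrative stand-in, not the repository's exact method):

```python
def smooth_keypoints(frames, alpha=0.7):
    """Exponentially smooth a sequence of keypoint lists to reduce
    frame-to-frame jitter. `frames` is a list of [(x, y), ...] lists;
    alpha weights the previous (smoothed) frame against the current one."""
    smoothed, prev = [], None
    for pts in frames:
        if prev is None:
            prev = [tuple(p) for p in pts]  # first frame passes through
        else:
            prev = [(alpha * px + (1 - alpha) * x, alpha * py + (1 - alpha) * y)
                    for (px, py), (x, y) in zip(prev, pts)]
        smoothed.append(prev)
    return smoothed
```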
Ubuntu 16.04
Python 3.6.5
PyTorch 0.4.1
OpenCV 3.4.4