PyTorch implementation of AnimeGAN for fast photo animation
- Paper: AnimeGAN: A Novel Lightweight GAN for Photo Animation (available on Semantic Scholar or from the Yoshino repo)
- Original TensorFlow implementation by Tachibana Yoshino
- Demo and Docker image on Replicate
*(Side-by-side comparison images: input photo | animated output.)*
- Training notebook on Google Colab
- Inference notebook on Google Colab
```bash
wget -O anime-gan.zip https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip
unzip anime-gan.zip -d /content
```
=> The dataset is extracted into your current folder under the name `dataset`.
To build a custom training set, you need a video file on your machine, for example `/home/ubuntu/Downloads/kimetsu_yaiba.mp4`.

Step 1. Create anime images from the video:

```bash
python3 script/video_to_images.py --video-path /home/ubuntu/Downloads/kimetsu_yaiba.mp4 \
    --save-path dataset/Kimetsu/style \
    --max-image 1800 \
    --image-size 256
```

Step 2. Create the edge-smoothed version of the dataset from Step 1:

```bash
python3 script/edge_smooth.py --dataset Kimetsu --image-size 256
```
To train AnimeGAN from the command line, run train.py as follows:
```bash
python3 train.py --dataset Hayao \
    --batch 6 \
    --init-epochs 4 \
    --checkpoint-dir {ckp_dir} \
    --save-image-dir {save_img_dir} \
    --save-interval 1 \
    --gan-loss lsgan \
    --init-lr 0.0001 \
    --lr-g 0.00002 \
    --lr-d 0.00004 \
    --wadvd 10.0 \
    --wadvg 10.0 \
    --wcon 1.5 \
    --wgra 3.0 \
    --wcol 30.0 \
    --resume GD \
    --use_sn
```

- `--dataset`: one of Hayao, Shinkai, Kimetsu, Paprika, SummerWar, or your custom dataset from Steps 1–2
- `--gan-loss`: one of [lsgan, hinge, bce]
- `--wadvd`: adversarial loss weight for D
- `--wadvg`: adversarial loss weight for G
- `--wcon`: content loss weight
- `--wgra`: Gram loss weight
- `--wcol`: color loss weight
- `--resume`: `G` to start from a pre-trained generator, `GD` to continue training the full GAN
- `--use_sn`: if set, use spectral normalization (default: False)
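The weight flags above combine into a single generator objective. A hedged sketch of how such a weighted sum could look, with hypothetical tensor arguments (e.g. VGG feature maps for the content and Gram terms) rather than the repo's exact code:

```python
import torch
import torch.nn.functional as F

def gram(feat):
    # Normalized Gram matrix of a feature map: (B, C, H, W) -> (B, C, C).
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(adv, fake_feat, photo_feat, anime_gray_feat,
                   fake_rgb, photo_rgb,
                   wadvg=10.0, wcon=1.5, wgra=3.0, wcol=30.0):
    """Weighted generator loss matching the --wadvg/--wcon/--wgra/--wcol flags.

    Illustrative only: argument names and the color-loss formulation are
    assumptions; the repo's loss code may differ in detail.
    """
    adv_loss = torch.mean((adv - 1.0) ** 2)                    # LSGAN objective
    con_loss = F.l1_loss(fake_feat, photo_feat)                # content loss
    gra_loss = F.l1_loss(gram(fake_feat), gram(anime_gray_feat))  # Gram loss
    col_loss = F.l1_loss(fake_rgb, photo_rgb)                  # color loss
    return wadvg * adv_loss + wcon * con_loss + wgra * gra_loss + wcol * col_loss
```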
To convert a folder of images or a single image, run inference_image.py, for example:

```bash
python3 inference_image.py --checkpoint {ckp_dir} \
    --src /content/test/HR_photo \
    --dest {working_dir}/inference_image_v2
```

`--src` and `--dest` can each be a directory or a file.
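Since `--src` may be either a file or a directory, the script must resolve it to a list of image paths first. A small hypothetical helper showing one way to do that (not the repo's actual code):

```python
from pathlib import Path

def list_images(src):
    """Resolve a --src argument (file or directory) to a sorted list of
    image paths. Illustrative sketch; extensions are an assumption."""
    exts = {".jpg", ".jpeg", ".png"}
    p = Path(src)
    if p.is_file():
        return [p]
    return sorted(q for q in p.iterdir() if q.suffix.lower() in exts)
```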
To convert a video to its anime version, run inference_video.py, for example:

```bash
python3 inference_video.py --checkpoint {ckp_dir} \
    --src /content/test_vid_3.mp4 \
    --dest /content/test_vid_3_anime.mp4 \
    --batch-size 2
```

Choose `--batch-size` carefully: too large a value can cause a CUDA out-of-memory error when the video's resolution is high.
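The reason `--batch-size` matters is that frames are stacked into fixed-size batches before being pushed through the generator, so GPU memory use grows with both batch size and frame resolution. A sketch of such batching over an iterable of frames (illustrative, not the actual script):

```python
import numpy as np

def batched(frames, batch_size=2):
    """Group an iterable of same-shaped frames into stacked numpy batches.

    Illustrative sketch: small batches keep peak GPU memory bounded for
    high-resolution videos; the last batch may be smaller.
    """
    batch = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield np.stack(batch)
            batch = []
    if batch:  # flush the remainder
        yield np.stack(batch)
```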
Anime transformation results (see more)

*(Side-by-side comparison images: input photo | output (Hayao style).)*
- Add Google Colab
- Add implementation details
- Add and train on other data