- Added data parallel training (--dataparallel).
- Added resuming from a pre-trained checkpoint, even when the checkpoint's num_domains is smaller than the desired num_domains (--resume_ckpt /path/to/checkpoint.pt). See the example command below.
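As a sketch, the two added flags might be combined with the original AFHQ training command like this (the checkpoint path is a placeholder; the remaining arguments mirror the training command further below):
python main.py --mode train --num_domains 3 --w_hpf 0 \
--lambda_reg 1 --lambda_sty 1 --lambda_ds 2 --lambda_cyc 1 \
--train_img_dir data/afhq/train \
--val_img_dir data/afhq/val \
--dataparallel \
--resume_ckpt /path/to/checkpoint.pt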
Original README:
StarGAN v2: Diverse Image Synthesis for Multiple Domains
Yunjey Choi*, Youngjung Uh*, Jaejun Yoo*, Jung-Woo Ha
In CVPR 2020. (* indicates equal contribution)
Paper: https://arxiv.org/abs/1912.01865
Video: https://youtu.be/0EVh5Ki4dIY
Abstract: A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain variations. The code, pre-trained models, and dataset are available at clovaai/stargan-v2.
The TensorFlow implementation of StarGAN v2 by our team member junho can be found at clovaai/stargan-v2-tensorflow.
Clone this repository:
git clone https://github.com/clovaai/stargan-v2.git
cd stargan-v2/
Install the dependencies:
conda create -n stargan-v2 python=3.6.7
conda activate stargan-v2
conda install -y pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.0 -c pytorch
conda install x264=='1!152.20180717' ffmpeg=4.0.2 -c conda-forge
pip install opencv-python==4.1.2.30 ffmpeg-python==0.2.0 scikit-image==0.16.2
pip install pillow==7.0.0 scipy==1.2.1 tqdm==4.43.0 munch==2.5.0
We provide a script to download the datasets used in StarGAN v2 and the corresponding pre-trained networks. The datasets and network checkpoints will be downloaded and stored in the data and expr/checkpoints directories, respectively.
CelebA-HQ. To download the CelebA-HQ dataset and the pre-trained network, run the following commands:
bash download.sh celeba-hq-dataset
bash download.sh pretrained-network-celeba-hq
bash download.sh wing
AFHQ. To download the AFHQ dataset and the pre-trained network, run the following commands:
bash download.sh afhq-dataset
bash download.sh pretrained-network-afhq
After downloading the pre-trained networks, you can synthesize output images reflecting diverse styles (e.g., hairstyle) of reference images. The following commands will save generated images and interpolation videos to the expr/results directory.
CelebA-HQ. To generate images and interpolation videos, run the following command:
python main.py --mode sample --num_domains 2 --resume_iter 100000 --w_hpf 1 \
--checkpoint_dir expr/checkpoints/celeba_hq \
--result_dir expr/results/celeba_hq \
--src_dir assets/representative/celeba_hq/src \
--ref_dir assets/representative/celeba_hq/ref
To transform a custom image, first crop the image manually so that the proportion of the image occupied by the face is similar to that of CelebA-HQ. Then, run the following command for additional fine rotation and cropping. All custom images in the inp_dir directory will be aligned and stored in the out_dir directory.
python main.py --mode align \
--inp_dir assets/representative/custom/female \
--out_dir assets/representative/celeba_hq/src/female
AFHQ. To generate images and interpolation videos, run the following command:
python main.py --mode sample --num_domains 3 --resume_iter 100000 --w_hpf 0 \
--checkpoint_dir expr/checkpoints/afhq \
--result_dir expr/results/afhq \
--src_dir assets/representative/afhq/src \
--ref_dir assets/representative/afhq/ref
To evaluate StarGAN v2 using Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS), run the following commands:
# celeba-hq
python main.py --mode eval --num_domains 2 --w_hpf 1 \
--resume_iter 100000 \
--train_img_dir data/celeba_hq/train \
--val_img_dir data/celeba_hq/val \
--checkpoint_dir expr/checkpoints/celeba_hq \
--eval_dir expr/eval/celeba_hq
# afhq
python main.py --mode eval --num_domains 3 --w_hpf 0 \
--resume_iter 100000 \
--train_img_dir data/afhq/train \
--val_img_dir data/afhq/val \
--checkpoint_dir expr/checkpoints/afhq \
--eval_dir expr/eval/afhq
Note that the evaluation metrics are calculated using random latent vectors or reference images, both of which are selected by the seed number. In the paper, we reported the average of values from 10 measurements using different seed numbers (see the sketch below). The following table shows the calculated values for both latent-guided and reference-guided synthesis.
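A rough sketch of that averaging protocol for CelebA-HQ, assuming main.py exposes a --seed argument (not shown in the commands above); the per-seed results written to each eval_dir would then be averaged manually:
# celeba-hq, 10 measurements with different seeds (sketch; --seed is an assumption)
for seed in 0 1 2 3 4 5 6 7 8 9; do
python main.py --mode eval --num_domains 2 --w_hpf 1 \
--resume_iter 100000 --seed $seed \
--train_img_dir data/celeba_hq/train \
--val_img_dir data/celeba_hq/val \
--checkpoint_dir expr/checkpoints/celeba_hq \
--eval_dir expr/eval/celeba_hq_seed$seed
done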
To train StarGAN v2 from scratch, run the following commands. Generated images and network checkpoints will be stored in the expr/samples and expr/checkpoints directories, respectively. Training takes about three days on a single Tesla V100 GPU. Please see main.py for the full list of training arguments and their descriptions.
# celeba-hq
python main.py --mode train --num_domains 2 --w_hpf 1 \
--lambda_reg 1 --lambda_sty 1 --lambda_ds 1 --lambda_cyc 1 \
--train_img_dir data/celeba_hq/train \
--val_img_dir data/celeba_hq/val
# afhq
python main.py --mode train --num_domains 3 --w_hpf 0 \
--lambda_reg 1 --lambda_sty 1 --lambda_ds 2 --lambda_cyc 1 \
--train_img_dir data/afhq/train \
--val_img_dir data/afhq/val
We release a new dataset of animal faces, Animal Faces-HQ (AFHQ), consisting of 15,000 high-quality images at 512×512 resolution. The figure above shows example images of the AFHQ dataset. The dataset includes three domains of cat, dog, and wildlife, each providing about 5000 images. By having multiple (three) domains and diverse images of various breeds per each domain, AFHQ sets a challenging image-to-image translation problem. For each domain, we select 500 images as a test set and provide all remaining images as a training set. To download the dataset, run the following command:
bash download.sh afhq-dataset
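The training and evaluation commands above expect one subfolder per domain under the train and val splits. A quick check of the layout after download (the exact folder names are an assumption based on the three domains):
ls data/afhq/train   # expected: one folder per domain, e.g. cat/ dog/ wild/
ls data/afhq/val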
The source code, pre-trained models, and dataset are available under the Creative Commons BY-NC 4.0 license by NAVER Corporation. You can use, copy, transform, and build upon the material for non-commercial purposes as long as you give appropriate credit by citing our paper and indicate if changes were made.
For business inquiries, please contact clova-jobs@navercorp.com.
For technical and other inquiries, please contact yunjey.choi@navercorp.com.
If you find this work useful for your research, please cite our paper:
@inproceedings{choi2020starganv2,
title={StarGAN v2: Diverse Image Synthesis for Multiple Domains},
author={Yunjey Choi and Youngjung Uh and Jaejun Yoo and Jung-Woo Ha},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2020}
}
We would like to thank the full-time and visiting Clova AI Research members for their valuable feedback and an early review: especially Seongjoon Oh, Junsuk Choe, Muhammad Ferjad Naeem, and Kyungjune Baek.