An efficient PyTorch library for deep generative modeling. May the Generative Force (GenForce) be with You.
- Encoder Training: We support training encoders on top of pre-trained GANs for GAN inversion.
- Model Converters: You can easily migrate your already-started projects to this repository. Please check here for more details.
- Distributed training framework.
- Fast training speed.
- Modular design for prototyping new models.
- Model zoo containing a rich set of pre-trained GAN models, with a Colab live demo to play with.
To install the library, set up the environment as follows:

- Create a virtual environment via `conda`.

  ```bash
  conda create -n genforce python=3.7
  conda activate genforce
  ```

- Install `cuda` and `cudnn`. (We use `CUDA 10.0` in case you would like to use `TensorFlow 1.15` for model conversion.)

  ```bash
  conda install cudatoolkit=10.0 cudnn=7.6.5
  ```

- Install `torch` and `torchvision`.

  ```bash
  pip install torch==1.7 torchvision==0.8
  ```

- Install the remaining requirements.

  ```bash
  pip install -r requirements.txt
  ```
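Once everything is installed, a quick sanity check of the environment can save debugging time later. The script below is a small standalone sketch (not part of the library) that only uses standard `torch`/`torchvision` calls:

```python
# check_env.py -- minimal environment sanity check (not part of GenForce).
import torch
import torchvision

print(f"torch version: {torch.__version__}")              # expect 1.7.x
print(f"torchvision version: {torchvision.__version__}")  # expect 0.8.x
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU count: {torch.cuda.device_count()}")
    print(f"cuDNN version: {torch.backends.cudnn.version()}")
```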
We provide a quick training demo, `scripts/stylegan_training_demo.py`, which allows you to train StyleGAN on a toy dataset (500 anime-face images at 64×64 resolution). Try it via

```bash
./scripts/stylegan_training_demo.sh
```

We also provide an inference demo, `synthesize.py`, which allows you to synthesize images with pre-trained models. Generated images can be found at `work_dirs/synthesis_results/`. Try it via

```bash
python synthesize.py stylegan_ffhq1024
```
You can also play the demo at Colab.
Pre-trained models can be found in the model zoo.
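Programmatically, synthesis boils down to sampling latent codes and running the generator. The sketch below is generic PyTorch, not the library's confirmed API; `build_generator` in the usage comment is a hypothetical stand-in for the loading logic inside `synthesize.py`:

```python
# sample_sketch.py -- illustrative only; the exact model-loading API lives in
# synthesize.py, and `build_generator` below is a hypothetical placeholder.
import torch

def sample_images(generator, num_samples=4, latent_dim=512, device="cuda"):
    """Draw latent codes from N(0, I) and map them to images."""
    generator = generator.to(device).eval()
    with torch.no_grad():
        z = torch.randn(num_samples, latent_dim, device=device)
        images = generator(z)                   # assumed (N, 3, H, W) in [-1, 1]
        images = (images.clamp(-1, 1) + 1) / 2  # rescale to [0, 1] for saving
    return images

# Usage (assuming a loaded generator):
#   generator = build_generator('stylegan_ffhq1024')  # hypothetical helper
#   imgs = sample_images(generator)
```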
To test a pre-trained model:

- On local machine:

  ```bash
  GPUS=8
  CONFIG=configs/stylegan_ffhq256_val.py
  WORK_DIR=work_dirs/stylegan_ffhq256_val
  CHECKPOINT=checkpoints/stylegan_ffhq256.pth
  ./scripts/dist_test.sh ${GPUS} ${CONFIG} ${WORK_DIR} ${CHECKPOINT}
  ```

- Using `slurm`:

  ```bash
  CONFIG=configs/stylegan_ffhq256_val.py
  WORK_DIR=work_dirs/stylegan_ffhq256_val
  CHECKPOINT=checkpoints/stylegan_ffhq256.pth
  GPUS=8 ./scripts/slurm_test.sh ${PARTITION} ${JOB_NAME} \
      ${CONFIG} ${WORK_DIR} ${CHECKPOINT}
  ```

All files produced during training, such as log messages, checkpoints, and synthesis snapshots, will be saved to the work directory.
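The distributed launchers spawn one process per GPU. As background, the sketch below shows the standard PyTorch pattern such launchers rely on (per-rank process-group initialization); it is a generic illustration, not the actual contents of `dist_test.sh`:

```python
# dist_sketch.py -- generic PyTorch distributed bootstrap, illustrative only.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Each spawned process joins the same process group under its own rank.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # ... build the model, wrap it in DistributedDataParallel, run the job ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```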
To train a GAN:

- On local machine:

  ```bash
  GPUS=8
  CONFIG=configs/stylegan_ffhq256.py
  WORK_DIR=work_dirs/stylegan_ffhq256_train
  ./scripts/dist_train.sh ${GPUS} ${CONFIG} ${WORK_DIR} \
      [--options additional_arguments]
  ```

- Using `slurm`:

  ```bash
  CONFIG=configs/stylegan_ffhq256.py
  WORK_DIR=work_dirs/stylegan_ffhq256_train
  GPUS=8 ./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} \
      ${CONFIG} ${WORK_DIR} \
      [--options additional_arguments]
  ```
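For orientation, the sketch below shows what one generic alternating GAN update looks like in plain PyTorch, using the non-saturating logistic loss. It illustrates the training scheme only; it is not the library's actual runner or loss code:

```python
# gan_step_sketch.py -- one generic adversarial update, illustrative only.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real, latent_dim=512):
    device = real.device

    # --- discriminator update: push D(real) up, D(fake) down ---
    z = torch.randn(real.size(0), latent_dim, device=device)
    fake = G(z).detach()                       # no gradient through G here
    d_loss = (F.softplus(-D(real)).mean()      # -log sigmoid(D(real))
              + F.softplus(D(fake)).mean())    # -log(1 - sigmoid(D(fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- generator update: non-saturating loss ---
    z = torch.randn(real.size(0), latent_dim, device=device)
    g_loss = F.softplus(-D(G(z))).mean()       # -log sigmoid(D(G(z)))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```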
To train an encoder for GAN inversion:

- On local machine:

  ```bash
  GPUS=8
  CONFIG=configs/stylegan_ffhq256_encoder_y.py
  WORK_DIR=work_dirs/stylegan_ffhq256_encoder_y
  ./scripts/dist_train.sh ${GPUS} ${CONFIG} ${WORK_DIR} \
      [--options additional_arguments]
  ```

- Using `slurm`:

  ```bash
  CONFIG=configs/stylegan_ffhq256_encoder_y.py
  WORK_DIR=work_dirs/stylegan_ffhq256_encoder_y
  GPUS=8 ./scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} \
      ${CONFIG} ${WORK_DIR} \
      [--options additional_arguments]
  ```
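Conceptually, encoder training freezes the pre-trained generator and optimizes an encoder so that re-generating from the predicted latent code reconstructs the input. The sketch below is a minimal, generic illustration with a pixel-wise loss only; `E` and `G` are placeholder modules, and real setups typically add perceptual and adversarial terms:

```python
# encoder_step_sketch.py -- generic inversion-encoder update, illustrative only.
import torch
import torch.nn.functional as F

def encoder_step(E, G, opt_e, real):
    """One training step: encode, reconstruct with a frozen G, match pixels."""
    for p in G.parameters():
        p.requires_grad_(False)     # the pre-trained generator stays fixed
    latent = E(real)                # image -> latent code
    recon = G(latent)               # latent -> reconstructed image
    loss = F.mse_loss(recon, real)  # real configs add perceptual/adversarial terms
    opt_e.zero_grad()
    loss.backward()
    opt_e.step()
    return loss.item()
```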
| Member | Module |
| :-- | :-- |
| Yujun Shen | models and running controllers |
| Yinghao Xu | runner and loss functions |
| Ceyuan Yang | data loader |
| Jiapeng Zhu | evaluation metrics |
| Bolei Zhou | cheerleader |
NOTE: The above table only lists the person in charge of each module. We help each other a lot and develop as a TEAM.
We welcome external contributors to join us for improving this library.
The project is under the MIT License.
We thank PGGAN, StyleGAN, StyleGAN2, StyleGAN2-ADA for their work on high-quality image synthesis. We thank IDInvert and GHFeat for their contribution to GAN inversion. We also thank MMCV for the inspiration on the design of controllers.
We open-source this library to the community to facilitate research on generative modeling. If you find our work useful and use the codebase or models in your research, please cite our work as follows.
```bibtex
@misc{genforce2020,
  title        = {GenForce},
  author       = {Shen, Yujun and Xu, Yinghao and Yang, Ceyuan and Zhu, Jiapeng and Zhou, Bolei},
  howpublished = {\url{https://github.com/genforce/genforce}},
  year         = {2020}
}
```