GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework
Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang
In ECCV 2020 (Spotlight)
An all-in-one GAN compression method that integrates model distillation, channel pruning, and quantization within a unified GAN minimax optimization framework.
Image-to-image translation with (compressed) CycleGAN:
./download_dataset <dataset_name>
This will download the dataset to the folder datasets/<dataset_name> (e.g., datasets/summer2winter_yosemite).
Use the official CycleGAN code to train the original dense CycleGAN.
Using the pretrained dense generator and discriminator to initialize G and D for GAN Slimming is necessary on the horse2zebra dataset. Download the dense models for GS32 and GS8 from here and here respectively, and put them under the project root path.
Use the pretrained dense generator to generate style transfer results on the training set and put them in the folder train_set_result/<dataset_name>.
For example, train_set_result/summer2winter_yosemite/B/2009-12-06 06:58:39_fake.png is the fake winter image transferred from the real summer image datasets/summer2winter_yosemite/A/2009-12-06 06:58:39.png using the original dense CycleGAN.
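These dense-generator outputs later serve as distillation targets during GS training. As a rough illustration of this step (not this repo's actual script; the Generator class, import path, and checkpoint filename are assumptions), a batch-translation loop might look like:

```python
import os
import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

from models import Generator  # assumed location of the CycleGAN generator class

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

G = Generator().to(device).eval()
# assumed checkpoint filename for the dense A2B generator
G.load_state_dict(torch.load('netG_A2B_dense.pth', map_location=device))

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

src_dir = 'datasets/summer2winter_yosemite/A'
dst_dir = 'train_set_result/summer2winter_yosemite/B'
os.makedirs(dst_dir, exist_ok=True)

with torch.no_grad():
    for name in sorted(os.listdir(src_dir)):
        img = Image.open(os.path.join(src_dir, name)).convert('RGB')
        fake = G(preprocess(img).unsqueeze(0).to(device))
        stem = os.path.splitext(name)[0]
        # map the generator's [-1, 1] output back to [0, 1] before saving
        save_image(fake * 0.5 + 0.5, os.path.join(dst_dir, stem + '_fake.png'))
```

The `_fake.png` suffix and the A/B folder layout follow the naming convention shown in the example above.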
GS-32:
python gs.py --rho 0.01 --dataset <dataset_name> --task <task_name>
GS-8:
python gs.py --rho 0.01 --quant --dataset <dataset_name> --task <task_name>
The training results (checkpoints, loss curves, etc.) will be saved in results/<dataset_name>/<task_name>.
Valid <dataset_name>s are: horse2zebra, summer2winter_yosemite.
Valid <task_name>s are: A2B, B2A. (For example, horse2zebra/A2B means transferring horse to zebra and horse2zebra/B2A means transferring zebra to horse.)
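Here --rho sets the strength of the L1 sparsity penalty on the channel-wise scaling factors: larger rho drives more factors to zero and thus prunes more channels, while --quant additionally quantizes the model to 8 bits (GS-8). The sketch below illustrates both ideas in generic PyTorch; the function names are ours, and the actual implementation in gs.py may differ:

```python
import torch
import torch.nn as nn

def sparsity_loss(generator: nn.Module, rho: float) -> torch.Tensor:
    """rho-weighted L1 penalty on normalization scale factors.

    Factors driven to exactly zero act as a channel-wise mask:
    those channels are what extract_subnet.py later removes.
    """
    terms = [m.weight.abs().sum()
             for m in generator.modules()
             if isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d))
             and m.weight is not None]
    return rho * torch.stack(terms).sum()

def uniform_quantize(x: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Generic uniform quantizer to 2**bits levels over the tensor's range,
    a stand-in for the 8-bit quantization that --quant enables."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / (2 ** bits - 1)
    return torch.round((x - lo) / scale) * scale + lo
```

In the minimax training loop, a term like sparsity_loss would be added to the generator objective alongside the adversarial and distillation losses.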
GAN Slimming has pruned some channels in the network by setting their entries in the channel-wise mask to zero. Now we need to extract the actual compressed subnetwork.
python extract_subnet.py --dataset <dataset_name> --task <task_name> --model_str <model_str>
The extracted subnetworks will be saved in subnet_structures/<dataset_name>/<task_name>.
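Conceptually, extraction keeps only the channels whose scaling factors stayed non-zero and rebuilds each layer at its reduced width. A hypothetical sketch of that idea (the real logic lives in extract_subnet.py; surviving_channels and prune_conv are our names):

```python
import torch
import torch.nn as nn

def surviving_channels(norm: nn.Module, thr: float = 1e-5) -> torch.Tensor:
    """Indices of channels whose scale factor is effectively non-zero."""
    return torch.nonzero(norm.weight.abs() > thr, as_tuple=False).flatten()

def prune_conv(conv: nn.Conv2d, keep: torch.Tensor) -> nn.Conv2d:
    """Rebuild a conv layer keeping only the selected output channels."""
    new = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                    stride=conv.stride, padding=conv.padding,
                    bias=conv.bias is not None)
    new.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new.bias.data = conv.bias.data[keep].clone()
    return new
```

A full extraction must also slice the matching normalization parameters and the input channels of the following layer; extract_subnet.py handles that bookkeeping.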
python finetune.py --dataset <dataset_name> --task <task_name> --base_model_str <base_model_str>
Finetune results will be saved in finetune_results/<dataset_name>/<task_name>.
Pretrained models are available through Google Drive.
If you use this code for your research, please cite our paper.
@inproceedings{wang2020ganslimming,
title={GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework},
author={Wang, Haotao and Gui, Shupeng and Yang, Haichuan and Liu, Ji and Wang, Zhangyang},
booktitle={European Conference on Computer Vision},
year={2020}
}
Please also check our concurrent work on combining neural architecture search (NAS) and model distillation for GAN compression:
Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, and Zhangyang Wang. "AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks." ICML, 2020. [pdf] [code]