SCGAN

The search code will be published once the paper is accepted; the training code and network weights are published immediately.

Code used for "SCGAN: Sampling and Clustering-based Neural Architecture Search for GANs".

Introduction

We have designed an evolutionary neural architecture search algorithm for generative adversarial networks (GANs), dubbed SCGAN. Experiments validate the effectiveness of SCGAN on the task of unconditional image generation. Extensive experiments on the CIFAR-10 and STL-10 datasets demonstrate that SCGAN requires only 0.42 GPU days to find a superior GAN architecture in a search space containing approximately 10^15 network architectures. Our best-found GAN outperformed those obtained by other neural architecture search methods, with performance metrics of IS=9.68±0.06 and FID=5.54 on CIFAR-10, and IS=12.12±0.13 and FID=12.54 on STL-10.

Framework

Figure: The framework of SCGAN.

Performance

Figures: performance results on CIFAR-10 and STL-10.

Set-Up

1. Environment requirements

The search environment is consistent with AlphaGAN. To run this code, you need:

  • PyTorch 2.0
  • TensorFlow 2.12.0
  • CUDA 12.0

Other requirements are listed in environment.yaml:

conda env create -f environment.yaml
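
After creating the environment, a quick sanity check such as the one below (a minimal sketch; the expected versions are taken from the list above) can confirm that the right framework and CUDA builds are active:

# Sanity-check the environment; expected versions follow the list above.
import torch
import tensorflow as tf

print("PyTorch:", torch.__version__)                 # expect 2.0.x
print("TensorFlow:", tf.__version__)                 # expect 2.12.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA (PyTorch build):", torch.version.cuda)   # expect 12.x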

2. Prepare the FID statistics file

You need to create a "fid_stat" directory and download the statistics files of the real images.

mkdir fid_stat
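
If you prefer to compute the statistics yourself rather than download them, the sketch below shows one possible way, assuming the pytorch-fid package (pip install pytorch-fid) and the common .npz layout with "mu" and "sigma" keys used by AutoGAN/EAGAN-style fid_stat files; the output filename is a placeholder, so match whatever the scripts expect:

# A minimal sketch for building an FID statistics file (assumptions noted above).
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from pytorch_fid.inception import InceptionV3

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Inception block that yields the standard 2048-d pooled features.
model = InceptionV3([InceptionV3.BLOCK_INDEX_BY_DIM[2048]]).to(device).eval()

loader = DataLoader(
    datasets.CIFAR10("./data", train=True, download=True,
                     transform=transforms.ToTensor()),  # floats in [0, 1]
    batch_size=100, num_workers=4)

feats = []
with torch.no_grad():
    for images, _ in loader:
        pred = model(images.to(device))[0]              # (N, 2048, 1, 1)
        feats.append(pred.squeeze(-1).squeeze(-1).cpu().numpy())
feats = np.concatenate(feats, axis=0)

# FID only needs the mean and covariance of the real-image features.
np.savez("fid_stat/fid_stats_cifar10_train.npz",       # placeholder filename
         mu=np.mean(feats, axis=0), sigma=np.cov(feats, rowvar=False))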

How to search for an architecture by yourself

1. Search on CIFAR-10

bash EAGAN_Only_G30.sh
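
For intuition only, the toy sketch below illustrates the general sampling-and-clustering idea suggested by the paper's title; it is not the repository's implementation, and evaluate_arch is a hypothetical helper. Candidate encodings are sampled, clustered, and only one representative per cluster is fully evaluated, which is what cuts the search cost:

# Toy illustration of sampling + clustering for NAS (all names hypothetical).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
NUM_SAMPLES, NUM_CLUSTERS, ENC_DIM = 1000, 20, 12

# Sample discrete architecture encodings from the search space.
population = rng.integers(0, 4, size=(NUM_SAMPLES, ENC_DIM))

# Cluster the encodings so that similar architectures share one evaluation.
labels = KMeans(n_clusters=NUM_CLUSTERS, n_init=10,
                random_state=0).fit_predict(population)

representatives = [population[labels == k][0] for k in range(NUM_CLUSTERS)]
# scores = [evaluate_arch(a) for a in representatives]  # e.g. short training, then IS/FID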

How to train the discovered architecture reported in the paper

1. Fully train GAN on CIFAR-10

bash ./scripts/train_arch_cifar10.sh

2. Fully train GAN on STL-10

bash ./scripts/train_arch_stl10.sh
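
For reference, one training step in AutoGAN/EAGAN-style pipelines typically uses the hinge loss; the sketch below is a generic illustration rather than this repository's exact training loop, and gen, dis, and the optimizers are placeholders for the networks built from the discovered architecture:

# One generic hinge-loss GAN update (all names are placeholders).
import torch
import torch.nn.functional as F

def train_step(gen, dis, opt_g, opt_d, real, z_dim=128):
    z = torch.randn(real.size(0), z_dim, device=real.device)

    # Discriminator: hinge loss on real and fake batches.
    opt_d.zero_grad()
    d_loss = (F.relu(1.0 - dis(real)).mean()
              + F.relu(1.0 + dis(gen(z).detach())).mean())
    d_loss.backward()
    opt_d.step()

    # Generator: maximize the critic score on fresh fakes.
    opt_g.zero_grad()
    g_loss = -dis(gen(z)).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()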

How to test the discovered architecture reported in the paper

1. Test the GAN on CIFAR-10

bash ./scripts/test_arch_cifar10.sh

2. Test the GAN on STL-10

bash ./scripts/test_arch_stl10.sh
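
If you want to reproduce IS/FID numbers outside the provided scripts, the sketch below shows one way with torchmetrics (pip install torchmetrics[image]); it is an alternative to the repo's evaluation code, and gen, real_loader, and the latent size 128 are placeholders:

# Score a trained generator with torchmetrics (assumptions noted above).
# Both metrics expect uint8 images in [0, 255].
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
fid = FrechetInceptionDistance(feature=2048).to(device)
is_metric = InceptionScore().to(device)

def to_uint8(x):                         # map [-1, 1] outputs to uint8
    return ((x.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)

with torch.no_grad():
    for real, _ in real_loader:          # assumed uint8 images in [0, 255]
        fid.update(real.to(device), real=True)
    for _ in range(50):                  # 50 x 100 = 5000 fake samples
        fake = to_uint8(gen(torch.randn(100, 128, device=device)))
        fid.update(fake, real=False)
        is_metric.update(fake)

print("FID:", fid.compute().item())
is_mean, is_std = is_metric.compute()
print(f"IS: {is_mean.item():.2f} +/- {is_std.item():.2f}")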

Acknowledgement

Some of the code is built upon:

1. EAGAN

2. AlphaGAN

3. Inception Score code from OpenAI's Improved GAN (official).

4. FID Score code and CIFAR-10 statistics file (official).

We thank them for their great work!