A versatile GAN (generative adversarial network) implementation focused on scalability and ease of use.
- Default to `encode_periodic_gaussian` encoder
- Default to `pyramid_no_stride` discriminator
- Default to `dense_resize_conv` generator
- Better defaults when creating a new configuration
- 3 new encoders
- New discriminator: `densenet` - based loosely on https://arxiv.org/abs/1608.06993
- Updated discriminator: `pyramid_no_stride` - `conv` and `avg_pool` together
- New generator: `dense_resize_conv` - original type of generator that seems to work well
- Updated generator: `resize_conv` - standard resize-conv generator. This works much better than `deconv`, which is not supported.
- Several quality-of-life improvements
- Support for multiple discriminators
- Support for discriminators on different image resolutions
- Fixed configuration save/load
- Cleaner CLI output
- Documentation cleanup
- pip package released!
- Better defaults with good variance at 256x256. The broken images showed up after training for 5 days.
- Initial private release
- For 256x256, we recommend a GTX 1080 or better. 32x32 can be run on lower-end GPUs.
- CPU mode is extremely slow. Never train with it!
```
pip install hypergan --upgrade
```

```
# Train a 32x32 GAN with batch size 32 on a folder of pngs
hypergan train [folder] -s 32x32x3 -f png -b 32
```
On Ubuntu, install tcmalloc:

```
sudo apt-get install libgoogle-perftools4
```

and make sure to set this environment variable before training:

```
LD_PRELOAD="/usr/lib/libtcmalloc.so.4" hypergan train my_dataset
```
If you wish to modify hypergan:

```
git clone https://github.com/255BITS/hypergan
cd hypergan
python3 setup.py develop
```
Make sure to include the following two arguments:

```
CUDA_VISIBLE_DEVICES= hypergan --device '/cpu:0'
```
To build a new network you need a dataset. Your data should be structured like:

```
[folder]/[directory]/*.png
```

If you don't have a dataset, you can use CelebA: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
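As a sketch of that layout (all folder and class names here are hypothetical, not prescribed by hypergan), each subdirectory holds the images for one class:

```shell
# Each subdirectory is one class/label
mkdir -p my_dataset/class_a my_dataset/class_b

# Copy your images into the class folders, e.g.:
# cp /path/to/images/*.png my_dataset/class_a/

ls my_dataset
```

With a single class, you can simply keep all images in one subdirectory.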
```
# Train a 256x256 GAN with batch size 32 on a folder of pngs
hypergan train [folder] -s 256x256x3 -f png -b 32 --config [name]
```
Configs and saves are located in `~/.hypergan/`.
Each directory in your dataset represents a classification. Using supervised learning mode will turn your discriminator into a classifier.
Same as supervised, except only include 1 directory in your dataset.
Configuration in HyperGAN uses JSON files. You can create a new config by running `hypergan train`. By default, configurations are randomly generated using Hyperchamber.

```
--config [name]
```

Naming a configuration during training is recommended. If your config is not named, a UUID will be used.
Build takes the same arguments as train and builds a generator. It is required for serve.

Building does two things:
- Loads the training model, which includes the discriminator
- Saves a ckpt model containing only the generator
Serve starts a Flask server. You can then access:

```
http://localhost:5000/sample.png?type=batch
```
Saves are stored in `~/.hypergan/saves/`. They can be large.
```
--format <type>
```

Type can be one of:
- jpg
- png
To see a detailed list, run:

```
hypergan -h
```
- `-s`, `--size`, optional (default 64x64x3): the size of your data in the form 'width'x'height'x'channels'
- `-f`, `--format`, optional (default png): file format of the images. Only jpg and png are supported for now.
The discriminator's job is to tell whether a piece of data is real or fake. In HyperGAN, a discriminator can also be a classifier.
You can combine multiple discriminators in a single GAN.
Progressive enhancement is enabled by default:
Default.
Progressive enhancement is disabled for technical reasons.
Note: This is currently broken
For Vae-GANs
Default
Standard resize-conv.
Default. Inspired by densenet.
Default.
Experimental.
One way a network learns:
To create your own visualizations, you can use the flag:

```
--frame_sample grid
```
To turn these images into a video:

```
ffmpeg -i samples/grid-%06d.png -vcodec libx264 -crf 22 -threads 0 gan.mp4
```
NOTE: `z_dims` must equal 2 and batch size must equal 32 for this to work.
Generative Adversarial Networks (2) consist of (at least) two neural networks that learn together over many epochs. The discriminator learns the difference between real and fake data. The generator learns to create fake data.

For a more in-depth introduction, see http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/
A single fully trained GAN consists of the following useful networks:

- `generator` - generates content that fools the `discriminator`.
- `discriminator` - gives a value between 0 and 1 designating how *real* the input data is.
- `classifier` - similar to a normal softmax classifier, but with certain advantages.
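The adversarial loop above can be sketched with a toy example. This is not HyperGAN's implementation (which is TensorFlow-based); it is a minimal numpy GAN under illustrative assumptions: an affine generator, a logistic-regression discriminator, and 1-D Gaussian "real" data. All names and hyperparameters are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def real_batch(n):
    # "Real" data: samples from N(3.0, 0.5)
    return rng.normal(3.0, 0.5, size=(n, 1))

# Generator: affine map z -> g_w * z + g_b
g_w, g_b = 0.1, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), a value in (0, 1)
d_w, d_b = 0.1, 0.0

lr = 0.02
for step in range(2000):
    z = rng.normal(size=(32, 1))
    x_fake = g_w * z + g_b
    x_real = real_batch(32)

    # Discriminator step: descend the cross-entropy loss
    #   -mean(log D(real)) - mean(log(1 - D(fake)))
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    grad_dw = np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake)
    grad_db = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_dw
    d_b -= lr * grad_db

    # Generator step: non-saturating loss -mean(log D(fake)),
    # i.e. learn to produce samples the discriminator scores as real
    p_fake = sigmoid(d_w * (g_w * z + g_b) + d_b)
    grad_x = (p_fake - 1) * d_w  # dLoss/dx_fake
    g_w -= lr * np.mean(grad_x * z)
    g_b -= lr * np.mean(grad_x)

fake_mean = float(np.mean(g_w * rng.normal(size=(1000, 1)) + g_b))
print("generated mean:", round(fake_mean, 2))
```

In a run like this the generated mean typically drifts toward the real mean as the two networks compete, which is the same dynamic the full image-scale networks follow.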
- GAN - https://arxiv.org/abs/1406.2661
- DCGAN - https://arxiv.org/abs/1511.06434
- InfoGAN - https://arxiv.org/abs/1606.03657
- Improved GAN - https://arxiv.org/abs/1606.03498
- Adversarial Inference - https://arxiv.org/abs/1606.00704
- DCGAN - https://github.com/carpedm20/DCGAN-tensorflow
- InfoGAN - https://github.com/openai/InfoGAN
- Improved GAN - https://github.com/openai/improved-gan
- Hyperchamber - https://github.com/255bits/hyperchamber
Our pivotal board is here: https://www.pivotaltracker.com/n/projects/1886395
Contributions are welcome and appreciated. To help out, just issue a pull request.
Also, if you create something cool with this let us know!
If you wish to cite this project, do so like this:
255bits (M. Garcia),
HyperGAN, (2017),
GitHub repository,
https://github.com/255BITS/HyperGAN