A composable GAN API and CLI. Built for developers, researchers, and artists.
The 0.10 preview is not available on pip; you must install it from source. Installation instructions and support are available in our Discord.
HyperGAN is currently in open beta.
Logos generated with examples/colorizer
See more on the HyperGAN YouTube channel.
StyleGAN is currently the state of the art. It is very cool.
- About
- Showcase
- Documentation
- Changelog
- Quick start
- The pip package hypergan
- Training
- Sampling
- API
- Datasets
- Contributing
- Versioning
- Sources
- Papers
- Citation
Generative Adversarial Networks consist of two learning systems that train together: a generator that produces data and a discriminator that judges it. HyperGAN implements these learning systems as deep neural networks in TensorFlow.
For an introduction to GANs, see http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/
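As a rough illustration of that two-player setup, here is a minimal, framework-free sketch of the adversarial training loop on a toy 1-D problem. This is not HyperGAN code and the numbers are arbitrary; HyperGAN runs the same loop with deep networks, configurable losses, and trainers in TensorFlow.

```python
# Toy GAN: a one-parameter generator learns to shift noise toward the data
# distribution while a logistic discriminator learns to tell real from fake.
import numpy as np

rng = np.random.default_rng(0)
real_mean = 3.0                  # the "dataset": samples from N(3, 1)
g_bias = 0.0                     # generator: G(z) = z + g_bias
d_w, d_b = 0.0, 0.0              # discriminator: D(x) = sigmoid(d_w * x + d_b)
lr, batch = 0.05, 64

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for step in range(2000):
    real = rng.normal(real_mean, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + g_bias          # G(z)

    # Discriminator step: increase log D(real) + log(1 - D(fake))
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    d_b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: increase log D(fake), i.e. try to fool the discriminator
    d_fake = sigmoid(d_w * fake + d_b)
    g_bias += lr * np.mean((1 - d_fake) * d_w)

print("generated mean:", round(g_bias, 2), "- data mean:", real_mean)
```

After a few thousand alternating updates the generated mean should drift toward the data mean; the same push-and-pull, scaled up to convolutional networks, is what produces the image samples shown in the showcase.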
HyperGAN is a community project. GANs are a new and very active field of research. Join the community Discord.
- Community project
- Unsupervised learning
- Transfer learning
- Online learning
- Dataset agnostic
- Reproducible architectures using json configurations
- Domain Specific Language to define custom architectures
- GUI (pygame and tk)
- API
- CLI
See the full changelog here: Changelog.md
Recommended: GTX 1080+
pip3 install hypergan --upgrade
To verify that TensorFlow and HyperGAN are installed correctly and have access to your devices, run:
hypergan test
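If hypergan test reports problems, it can also help to check TensorFlow's device visibility directly from Python. A minimal diagnostic sketch (device_lib is an internal TensorFlow module, so it may move between versions):

```python
# Quick device check, independent of hypergan.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("tensorflow", tf.__version__)
for device in device_lib.list_local_devices():
    # Expect at least one CPU entry; a missing GPU entry usually points to a
    # driver or CUDA setup problem rather than a hypergan problem.
    print(device.device_type, device.name)
```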
If you use virtualenv:
virtualenv --system-site-packages -p python3 hypergan
source hypergan/bin/activate
If installation fails, try installing the dependencies manually:
pip3 install numpy tensorflow-gpu hyperchamber pillow pygame
If the above step fails, see the dependency documentation:
- tensorflow - https://www.tensorflow.org/install/
- pygame - http://www.pygame.org/wiki/GettingStarted
hypergan new mymodel
This will create mymodel.json based on the default configuration. You can choose a different configuration template with the -c flag.
List all available configuration templates with --list-templates or -l:
hypergan new mymodel -l
# Train a 32x32 GAN on a folder of folders of pngs, resizing images as necessary
hypergan train folder/ -s 32x32x3 -f png -c mymodel --resize
If you wish to modify hypergan itself, install it from source:
git clone https://github.com/hypergan/hypergan
cd hypergan
python3 setup.py develop
To run on the CPU, make sure to include the following two arguments:
CUDA_VISIBLE_DEVICES= hypergan --device '/cpu:0'
Don't train on CPU! It's too slow.
hypergan -h
# Train a 32x32 GAN with batch size 32 on a folder of pngs
hypergan train [folder] -s 32x32x3 -f png -b 32 --config [name]
# Same as above, but use the static_batch sampler, sample every 5 steps, and save samples to disk
hypergan train [folder] -s 32x32x3 -f png -b 32 --config [name] --sampler static_batch --sample_every 5 --save_samples
By default hypergan will not save samples to disk. To change this, use --save_samples
One way a network learns:
To create videos:
ffmpeg -i samples/%06d.png -vcodec libx264 -crf 22 -threads 0 gan.mp4
To see a detailed list of options, run:
hypergan -h
See the examples documentation at https://github.com/hypergan/HyperGAN/tree/master/examples
To build a new network you need a dataset. Your data should be structured like:
[folder]/[directory]/*.png
Datasets in HyperGAN are meant to be simple to create. Just use a folder of images:
[folder]/*.png
For jpg images, pass -f jpg.
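For example, a dataset laid out like this would work (the folder and file names are just placeholders):

```
faces/               <- pass this folder to hypergan train
  celeba/
    000001.png
    000002.png
```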
- Loose images of any kind can be used
- CelebA aligned faces http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
- MS Coco http://mscoco.org/
- ImageNet http://image-net.org/
- youtube-dl (see examples/Readme.md)
To convert and resize your data for processing, you can use imagemagick:
for i in *.jpg; do convert "$i" -resize "300x256" -gravity north -extent 256x256 -format png -crop 256x256+0+0 +repage "$i-256x256.png"; done
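If you would rather stay in Python, a rough equivalent using Pillow (already listed as a dependency above) might look like the sketch below. It is not an exact reproduction of the imagemagick flags: it scales the short side to 256 and takes a centered 256x256 crop.

```python
# Resize every jpg in the current directory to a 256x256 png for training.
import glob
from PIL import Image

size = 256
for path in glob.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    # Scale the short side to `size`, then crop a centered square.
    scale = size / min(img.size)
    img = img.resize((round(img.width * scale), round(img.height * scale)))
    left = (img.width - size) // 2
    top = (img.height - size) // 2
    img.crop((left, top, left + size, top + size)).save(f"{path}-{size}x{size}.png")
```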
Contributions are welcome and appreciated! We have many open issues in the Issues tab. Join the Discord.
HyperGAN uses semantic versioning. http://semver.org/
TLDR: x.y.z
- x is incremented on stable public releases.
- y is incremented on API breaking changes. This includes configuration file changes and graph construction changes.
- z is incremented on non-API breaking changes. z changes will be able to reload a saved graph.
- GAN - https://arxiv.org/abs/1406.2661
- DCGAN - https://arxiv.org/abs/1511.06434
- InfoGAN - https://arxiv.org/abs/1606.03657
- Improved GAN - https://arxiv.org/abs/1606.03498
- Adversarial Inference - https://arxiv.org/abs/1606.00704
- Energy-based Generative Adversarial Network - https://arxiv.org/abs/1609.03126
- Wasserstein GAN - https://arxiv.org/abs/1701.07875
- Least Squares GAN - https://arxiv.org/pdf/1611.04076v2.pdf
- Boundary Equilibrium GAN - https://arxiv.org/abs/1703.10717
- Self-Normalizing Neural Networks - https://arxiv.org/abs/1706.02515
- Variational Approaches for Auto-Encoding Generative Adversarial Networks - https://arxiv.org/pdf/1706.04987.pdf
- CycleGAN - https://junyanz.github.io/CycleGAN/
- DiscoGAN - https://arxiv.org/pdf/1703.05192.pdf
- Softmax GAN - https://arxiv.org/abs/1704.06191
- The Cramer Distance as a Solution to Biased Wasserstein Gradients - https://arxiv.org/abs/1705.10743
- Improved Training of Wasserstein GANs - https://arxiv.org/abs/1704.00028
- More...
- DCGAN - https://github.com/carpedm20/DCGAN-tensorflow
- InfoGAN - https://github.com/openai/InfoGAN
- Improved GAN - https://github.com/openai/improved-gan
HyperGAN Community
HyperGAN, (2016-2019+),
GitHub repository,
https://github.com/HyperGAN/HyperGAN
HyperGAN comes with no warranty or support.