Artwork Generation Using Deep Convolutional GAN, Conditional GAN and Creative Adversarial Network

This repository contains three GAN models for generating realistic artwork paintings. The models are implemented in PyTorch.

WikiArt Dataset

The WikiArt dataset, resized to 64x64, can be downloaded from this link.

The original, full-resolution WikiArt dataset is available in this repo.
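
Once the resized images are unpacked into per-style subfolders, a loader along the following lines can feed them to the models. This is a minimal sketch: the `wikiart64` directory name, batch size, and worker count are illustrative assumptions, not the repo's actual layout.

```python
import torch
from torchvision import datasets, transforms

# Normalize to [-1, 1], matching a generator with a tanh output layer.
transform = transforms.Compose([
    transforms.Resize(64),        # no-op if images are already 64x64
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

# Assumes a layout like wikiart64/<style_name>/<image>.jpg; ImageFolder
# derives the style labels from the subdirectory names.
dataset = datasets.ImageFolder(root="wikiart64", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=128,
                                     shuffle=True, num_workers=2)
```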

References

The models in this repository are implementations of the following papers:

  • Deep Convolutional GAN (DCGAN) Paper: A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.

  • Conditional GAN (CGAN) Paper: M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014.

  • Creative Adversarial Network (CAN) Paper: A. Elgammal, B. Liu, M. Elhoseiny, and M. Mazzone, “CAN: Creative adversarial networks, generating “art” by learning about styles and deviating from style norms,” arXiv preprint arXiv:1706.07068, 2017.

This PyTorch tutorial was extremely helpful in developing our models.

Our project paper contains more detailed information and explanations about the architectures and results.

Models

  • DCGAN

The DCGAN architecture is our baseline for creating realistic artwork paintings. Below are some examples generated by our network.
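
For reference, a minimal sketch of a DCGAN-style generator for 64x64 RGB images, following Radford et al.; the latent size `nz` and feature-map width `ngf` are assumed values and not necessarily the exact configuration in our notebooks.

```python
import torch.nn as nn

nz, ngf = 100, 64  # assumed latent dimension and base feature-map width

# Stack of strided transposed convolutions that upsamples a (nz, 1, 1)
# noise vector to a (3, 64, 64) image, as in the DCGAN paper.
generator = nn.Sequential(
    nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # 1x1  -> 4x4
    nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # 4x4  -> 8x8
    nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 8x8  -> 16x16
    nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 16x16 -> 32x32
    nn.BatchNorm2d(ngf), nn.ReLU(True),
    nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # 32x32 -> 64x64
    nn.Tanh(),                                                  # outputs in [-1, 1]
)
```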

  • CGAN

The CGAN architecture enables style-specific artwork generation by feeding the discriminator and the generator with artistic style labels. Below are some examples that belong to several artistic style classes.
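
One common way to feed the style label to the generator is to embed it and concatenate it with the noise vector, as in the sketch below; `n_styles`, `nz`, and the embedding size are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_styles, nz, emb_dim = 27, 100, 50  # assumed class count and dimensions

# Learnable embedding that maps an integer style id to a dense vector.
label_emb = nn.Embedding(n_styles, emb_dim)

def conditional_input(noise, labels):
    # noise: (B, nz, 1, 1); labels: (B,) integer style ids
    y = label_emb(labels).view(labels.size(0), emb_dim, 1, 1)
    return torch.cat([noise, y], dim=1)  # (B, nz + emb_dim, 1, 1)

# The generator's first layer then takes nz + emb_dim input channels; on the
# discriminator side, the label can similarly be tiled into extra image channels.
```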

  • CAN

The CAN architecture aims to generate style-ambiguous (or style-agnostic, 'creative') artwork pieces. The discriminator has access to artistic style labels, and during training the generator is penalized whenever the discriminator correctly classifies the artistic style of a fake artwork. The generator is therefore pushed to produce more creative artwork that cannot be assigned to any of the artistic styles. Below are some creative fake artwork pieces generated by our network.
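
A sketch of the style-ambiguity term this implies: Elgammal et al. formulate it with per-class binary cross-entropies, but an equivalent-in-spirit variant is the cross-entropy between the discriminator's predicted style distribution and a uniform target, shown below. The function name and the `lambda_amb` weight are hypothetical.

```python
import torch
import torch.nn.functional as F

def style_ambiguity_loss(style_logits):
    # style_logits: (B, n_styles) from the discriminator's style head.
    # The loss is minimized when the predicted style distribution is
    # uniform, i.e. the style of the fake image is maximally ambiguous.
    n_styles = style_logits.size(1)
    log_probs = F.log_softmax(style_logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / n_styles)
    return -(uniform * log_probs).sum(dim=1).mean()

# Sketch of the total generator objective: the usual adversarial "realness"
# term plus the weighted ambiguity term (lambda_amb is an assumed hyperparameter).
# g_loss = adversarial_loss + lambda_amb * style_ambiguity_loss(style_logits)
```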

Usage

A straightforward notebook is provided for each of the three models. Run each cell of the corresponding notebook to train the architecture and visualize the results.