Paper Title: "End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks"
ArXiv: https://arxiv.org/abs/2011.05552
Abstract:
Current GAN-based art generation methods produce unoriginal artwork due to their dependence on conditional input. Here, we propose Sketch-And-Paint GAN (SAPGAN), the first model which generates Chinese landscape paintings from end to end, without conditional input. SAPGAN is composed of two GANs: SketchGAN for generation of edge maps, and PaintGAN for subsequent edge-to-painting translation. Our model is trained on a new dataset of traditional Chinese landscape paintings never before used for generative research. A 242-person Visual Turing Test study reveals that SAPGAN paintings are mistaken as human artwork with 55% frequency, significantly outperforming paintings from baseline GANs. Our work lays the groundwork for truly machine-original art generation.
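The two-stage pipeline described in the abstract (noise → SketchGAN → edge map → PaintGAN → painting) can be sketched as data flow. The generators below are hypothetical stubs, not the paper's trained networks; they only illustrate how the stages compose:

```python
import numpy as np

SIZE = 512  # paintings in the dataset are 512x512

def sketch_gan(z: np.ndarray) -> np.ndarray:
    """Stub standing in for SketchGAN: latent vector -> grayscale edge map in [0, 1]."""
    rng = np.random.default_rng(int(abs(z.sum()) * 1e6) % (2**32))
    return rng.random((SIZE, SIZE))

def paint_gan(edges: np.ndarray) -> np.ndarray:
    """Stub standing in for PaintGAN: edge map -> RGB painting (edges tinted into 3 channels)."""
    return np.stack([edges, edges * 0.8, edges * 0.6], axis=-1)

def sapgan(z: np.ndarray) -> np.ndarray:
    """End-to-end generation: noise -> edge map -> painting, with no conditional input."""
    return paint_gan(sketch_gan(z))

painting = sapgan(np.random.default_rng(0).standard_normal(128))
print(painting.shape)  # (512, 512, 3)
```

The key design point this mirrors is that only the latent vector enters the pipeline; the intermediate edge map is generated, not supplied by a user.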
*(Figure: Sketch-And-Paint GAN paintings compared with baseline-model outputs.)*
Here, we provide the dataset used to train our Sketch-And-Paint GAN model. The dataset consists of 2,192 high-quality traditional Chinese landscape paintings (山水画, *shanshuihua*). All paintings are sized 512×512 pixels and come from the following sources:
- Princeton University Art Museum, 362 paintings
- Harvard University Art Museum, 101 paintings
- Metropolitan Museum of Art, 428 paintings
- Smithsonian's Freer Gallery of Art, 1,301 paintings
For more details about dataset collection methodology, please see the paper.
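A minimal loader for the paintings might look like the sketch below. It assumes the dataset has been unpacked into a single directory of JPEG files (the actual archive layout is not specified here) and that Pillow is installed; the function name and layout are illustrative, not part of the released dataset:

```python
from pathlib import Path

from PIL import Image  # assumes Pillow is installed


def load_paintings(root: str, size: int = 512):
    """Yield (filename, image) pairs, verifying each painting is size x size.

    `root` is assumed to contain the unpacked paintings as *.jpg files;
    adjust the glob pattern if the archive uses a different layout.
    """
    for path in sorted(Path(root).glob("*.jpg")):
        img = Image.open(path).convert("RGB")
        if img.size != (size, size):
            raise ValueError(f"{path.name}: expected {size}x{size}, got {img.size}")
        yield path.name, img
```

The size check is cheap insurance: GAN training pipelines typically assume a fixed resolution, and all paintings in this dataset are 512×512.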
Please cite the paper if you choose to use this dataset for your research.
@misc{xue2020endtoend,
  title={End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks},
  author={Alice Xue},
  year={2020},
  eprint={2011.05552},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}