artsificial

Generating art with neural networks

This project uses the OpenVINO™ toolkit to deploy a deep learning solution for art generation with Compositional Pattern Producing Networks, exploring the different patterns and color palettes that can be produced.

Introduction

Compositional Pattern Producing Networks (CPPNs) are a variation of Artificial Neural Networks with an architecture based on mathematical functions and guided by genetic algorithms. CPPNs may include many activation functions besides the Sigmoid and Gaussian, and the choice of functions can bias the network towards different types of patterns and regularities. For example, linear functions can be applied to produce linear or fractal-like patterns. Furthermore, neuroevolution techniques such as NeuroEvolution of Augmenting Topologies (NEAT) can be applied to evolve CPPNs.
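
As an illustration of the idea (a minimal sketch, not this project's exact model), a CPPN can be written in PyTorch as a small fully connected network that maps each pixel's coordinate features and a shared latent vector to a color:

import torch
import torch.nn as nn

class CPPN(nn.Module):
    """Minimal CPPN sketch: maps each pixel's coordinate features (x, y, r)
    plus a latent vector z to a color, so the output resolution is
    independent of the network size."""

    def __init__(self, z_dim=8, hidden=32, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.Tanh(),    # inputs: x, y, r, z
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, channels), nn.Sigmoid(),  # color values in [0, 1]
        )

    def forward(self, x, y, r, z):
        # x, y, r: (N, 1) coordinate columns; z: (N, z_dim) latent vector
        return self.net(torch.cat([x, y, r, z], dim=-1))

Setting channels to 1 instead of 3 yields greyscale output, and interpolating z between frames produces slowly morphing animations.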

Project's story

All of us had the chance to be selected for the Intel® Edge AI Scholarship Challenge at Udacity. Before that, we had won the Secure and Private AI Scholarship Challenge by Facebook AI and were selected for a follow-up Nanodegree Scholarship at Udacity, during which we graduated from the Computer Vision Nanodegree.

During the Intel® Edge AI Scholarship Challenge, along with other scholars from the Secure and Private AI Scholarship, we formed a study group on Slack and kept communicating and sharing ideas, resources, and concerns.

After the Project Showcase Challenge was announced, we started exploring the idea of creating a project in the Arts category. A source of inspiration was the Computer Vision Art Gallery, along with other impressive Contemporary Art projects that use AI.

We were fascinated by Compositional Pattern Producing Networks and, since we had no previous experience with them, we started learning more from the published resources while exploring their possibilities. Finally, we agreed to use a PyTorch implementation of a CPPN as our starting point and modify it to produce our desired outputs.

Tests & Example Outputs

Model 1

  • Sample 1: Greyscale, scale 0.1
  • Sample 2: Enhanced B&W, scale 0.1
  • Sample 3: RGB Palette, scale 0.3
  • Sample 4: High Resolution, scale 0.8

Model 2

  • Sample 1: 15 FPS, scale 0.3
  • Sample 2: 15 FPS, scale 0.3
  • Sample 3: 30 FPS, scale 0.1
  • Sample 4: 30 FPS, slow pattern change
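
The scale values above control how much the coordinate grid is stretched before it is fed to the network; smaller scales give smoother, lower-frequency patterns. A sketch of how such a grid might be built (the exact preprocessing in this project may differ):

import numpy as np

def coordinate_grid(img_size, scale):
    # Normalized coordinates in [-scale, scale] for every pixel
    axis = np.linspace(-1.0, 1.0, img_size) * scale
    x, y = np.meshgrid(axis, axis)
    r = np.sqrt(x ** 2 + y ** 2)  # radial distance encourages circular structure
    # Flatten into (img_size * img_size, 1) columns, ready for the network
    return x.reshape(-1, 1), y.reshape(-1, 1), r.reshape(-1, 1)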

Future Plans

We would like to keep improving the project and use it for artistic installations. One big goal could be producing generative images on a Raspberry Pi in real time.

Getting Started

In order to get a copy of this project up and running on your local machine for development and testing purposes, please follow the instructions below.

Prerequisites

For Windows 10

  • Microsoft Visual Studio* with C++ 2019, 2017, or 2015 with MSBuild
  • CMake 3.4 or higher 64-bit
  • Python 3.6.5 64-bit

Installation

  • Install the respective Intel® Distribution of OpenVINO™ toolkit for Windows 10 / macOS / Linux, following all the required steps in the documentation guide.

How to run the project

  • Set up the environment variables to run the OpenVINO™ application:
source /opt/intel/openvino/bin/setupvars.sh
  • Command line arguments:
usage:
Run inference [-h] --model MODEL --device DEVICE [--fps FPS] [--seconds SECONDS]
              [--img_size IMG_SIZE] [--scale SCALE] [--pattern_change_speed PATTERN_CHANGE_SPEED]
              [--save_frames]

required arguments:
--model MODEL                                   The location of the model XML file
--device DEVICE                                 The device on which inference should be performed [CPU, GPU, FPGA, MYRIAD, HETERO:CPU,GPU]

optional arguments:
--fps FPS                                       Number of Frames Per Second for the video
--seconds SECONDS                               The duration of the video in seconds
--img_size IMG_SIZE                             The width or height for the frame to be generated
--scale SCALE                                   Scale factor for inputs
--pattern_change_speed PATTERN_CHANGE_SPEED     The rate of flow/change of the pattern
--save_frames                                   Save the individual frames generated as PNG images
  • Sample commands for running

With the required arguments:

python ppn_app.py --model "models\ppn-model-2.xml" --device HETERO:CPU,GPU

With the optional arguments:

python ppn_app.py --model "models\ppn-model-2.xml" --device HETERO:CPU,GPU --fps 15 --seconds 15 --scale 0.2 --pattern_change_speed 0.6 --save_frames

Implementation on Raspberry Pi + Intel NCS2

To run the code on a Raspberry Pi combined with an Intel NCS2, the following line has to be changed:

plugin.load_model("models/ppn-model-1.xml", "MYRIAD") #was "CPU"
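
plugin here refers to the project's wrapper around the Inference Engine. With the raw OpenVINO™ Python API of that generation (IECore), an equivalent device switch would look roughly like this (a sketch, not the project's exact code):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="models/ppn-model-1.xml",
                      weights="models/ppn-model-1.bin")
# "MYRIAD" targets the Intel NCS2; the CPU plugin is not available
# in the Raspberry Pi build of OpenVINO.
exec_net = ie.load_network(network=net, device_name="MYRIAD")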

Overview

Built with

  • PyTorch - For AI model development
  • ONNX - To convert the model for use with the Inference Engine (see the conversion sketch after this list)
  • OpenVINO™ toolkit - For the application development
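
As a rough sketch of the PyTorch → ONNX → IR pipeline listed above (the file names, opset version, and cppn module are assumptions for illustration):

import torch
from cppn import CPPN  # hypothetical module holding the CPPN sketch from the Introduction

# Export the trained CPPN to ONNX with one dummy input per argument
model = CPPN()
dummy = (torch.zeros(1, 1), torch.zeros(1, 1),   # x, y
         torch.zeros(1, 1), torch.zeros(1, 8))   # r, z
torch.onnx.export(model, dummy, "ppn-model.onnx", opset_version=11)

# The ONNX file is then converted to OpenVINO IR (.xml/.bin) with the
# Model Optimizer, e.g.:
#   python mo.py --input_model ppn-model.onnx --output_dir models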

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.

Authors

Evi Giannakou, Susanne Brockmann, Kapil Chandorikar

Connect with us on LinkedIn

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Acknowledgements

References