Pix2Pix for Unity

This is an attempt to run pix2pix (image-to-image translation with a deep neural network) in real time with Unity. It contains its own implementation of an inference engine, so it doesn't require the installation of any other neural network framework.
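
As a rough illustration of how a per-frame translation loop is structured in Unity, here is a minimal sketch. The component name and the Graphics.Blit placeholder are illustrative only; they are not the repository's actual generator API.

```csharp
using UnityEngine;

// Illustrative sketch only: it shows the shape of a per-frame translation loop,
// not this repository's actual generator API. Graphics.Blit stands in for the
// compute-shader based inference pass.
public class RealTimeTranslation : MonoBehaviour
{
    [SerializeField] Texture _input;        // e.g. an edge sketch drawn by the user
    [SerializeField] RenderTexture _output; // translated image shown on screen

    void Update()
    {
        // In the actual project, the built-in inference engine would run the
        // pix2pix generator on _input and write the result to _output each
        // frame, entirely on the GPU via compute shaders.
        Graphics.Blit(_input, _output); // placeholder pass-through, not real inference
    }
}
```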

Sketch Pad demo

Sketch Pad is a demonstration that resembles the famous edges2cats demo but runs in real time. You can download a pre-built binary from the Releases page.

Demo video

System requirements

  • Unity 2018.1
  • Compute shader capability (DX11, Metal, Vulkan, etc.)

Although it's implemented in a platform-agnostic fashion, many parts of it are optimized for NVIDIA GPU architectures. To run the Sketch Pad demo smoothly, a Windows system with a GeForce GTX 1070 or better is highly recommended.
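
Because compute shader support is a hard requirement, a scene can verify it up front with standard Unity APIs. This is only a minimal sketch; the component name is made up and is not part of this repository.

```csharp
using UnityEngine;

// Minimal capability check using standard Unity APIs.
public class ComputeCapabilityCheck : MonoBehaviour
{
    void Awake()
    {
        if (!SystemInfo.supportsComputeShaders)
        {
            Debug.LogError("Compute shaders are not supported on this system; Pix2Pix cannot run.");
            enabled = false;
            return;
        }

        // Informational only: performance varies widely across GPU architectures.
        Debug.Log("Graphics device: " + SystemInfo.graphicsDeviceName +
                  " (" + SystemInfo.graphicsDeviceType + ")");
    }
}
```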

How to use a trained model

This repository doesn't contain any trained models, in order to save bandwidth and storage quota. To run the example project in the Unity Editor, download the pre-trained edges2cats model and copy it into Assets/StreamingAssets.
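
As a minimal sketch of verifying that the copied model is actually reachable at runtime, the following uses only standard Unity/.NET calls. The file name used here is an assumption; substitute the name of the .pict file you downloaded.

```csharp
using System.IO;
using UnityEngine;

// Sketch of locating the weight file at runtime with standard Unity/.NET calls.
public class WeightFileLocator : MonoBehaviour
{
    [SerializeField] string _fileName = "edges2cats_AtoB.pict"; // assumed file name

    void Start()
    {
        var path = Path.Combine(Application.streamingAssetsPath, _fileName);

        if (!File.Exists(path))
        {
            Debug.LogError("Weight file not found: " + path +
                           "\nDownload a pre-trained .pict model and copy it into Assets/StreamingAssets.");
            return;
        }

        Debug.Log("Found weight data at " + path);
        // Hand this path to the repository's weight reader / generator from here.
    }
}
```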

This implementation only supports the .pict weight data format used in Christopher Hesse's interactive demo. You can pick one of the pre-trained models or train your own model using pix2pix-tensorflow. To export weight data from a checkpoint, see the description in the export-checkpoint.py script.