SURF2017

Style Transfer based on Generative Adversarial Network

This is a project of the XJTLU 2017 Summer Undergraduate Research Fellowship. It aims to design a generative adversarial network that implements style transfer from a style image onto a content image. Related literature can be found on the Wiki.

1. Overview

Neural style transfer is one of the cutting-edge topics in deep learning. It aims to transfer the style of one image onto a content image, which can be a sketch or a colored image.

Our goal is to implement neural style transfer using CycleGAN. We also want to take one step further with CAN, which can generate images on its own after being trained on enough data.
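As an illustration of the CycleGAN idea, here is a minimal PyTorch sketch of the cycle-consistency loss. This is a generic sketch, not the project's code; `G`, `F`, and the weight `lam` are placeholder names.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    """G maps domain X -> Y (e.g. content -> styled),
    F maps Y -> X. Both are assumed to be nn.Module instances."""
    # Forward cycle: x -> G(x) -> F(G(x)) should recover x.
    loss_fwd = l1(F(G(real_x)), real_x)
    # Backward cycle: y -> F(y) -> G(F(y)) should recover y.
    loss_bwd = l1(G(F(real_y)), real_y)
    return lam * (loss_fwd + loss_bwd)
```

In CycleGAN this term is added to the usual adversarial losses for the two generators, which is what lets the model learn style transfer from unpaired images.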

2. Framework

acGAN

An auxiliary conditional GAN (acGAN) is used, with a residual U-Net with two guided decoders on the generator side. The discriminator consists of deConvNets and minimizes the difference between the generated image and the ground truth.
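The exact generator code is not shown here; the following hypothetical PyTorch sketch only illustrates the "two guided decoders" idea, with one shared encoder and two decoders receiving U-Net-style skip connections. Layer sizes, depths, and the guiding mechanism are all assumptions, not this project's configuration.

```python
import torch
import torch.nn as nn

class TwoDecoderUNet(nn.Module):
    """Sketch: one encoder, two decoders sharing its features."""
    def __init__(self, in_ch=3, out_ch=3, feat=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, feat, 4, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(feat, feat * 2, 4, 2, 1), nn.ReLU())
        # Two decoders; "guided" here simply means each one receives
        # the skip connection from enc1 (an assumption for illustration).
        def decoder():
            return nn.ModuleList([
                nn.Sequential(nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1), nn.ReLU()),
                nn.ConvTranspose2d(feat * 2, out_ch, 4, 2, 1),
            ])
        self.dec_a = decoder()
        self.dec_b = decoder()

    def _decode(self, dec, bottleneck, skip):
        h = dec[0](bottleneck)
        # U-Net skip connection: concatenate encoder features, then upsample.
        return torch.tanh(dec[1](torch.cat([h, skip], dim=1)))

    def forward(self, x):
        s1 = self.enc1(x)
        b = self.enc2(s1)
        return self._decode(self.dec_a, b, s1), self._decode(self.dec_b, b, s1)
```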

VGG-19 Pretrained Very Deep Network

This file is essential for the network; the download link can be found here. VGG-19 is a pretrained very deep ConvNet that can be used directly. Similar pretrained models such as ResNet and VGG-16 will also be tested.
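To show how a pretrained VGG-19 is typically used directly as a fixed feature extractor (e.g. for perceptual or style losses), here is a torchvision sketch. The layer indices below are a common convention (ReLU outputs at several depths), not necessarily the layers this project uses.

```python
import torch
import torchvision.models as models

# Load pretrained VGG-19 and freeze it: it serves only as a
# fixed feature extractor, not as a trainable part of the GAN.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def vgg_features(x, layers=(3, 8, 17, 26)):
    """Return intermediate activations at the given layer indices."""
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

# Example usage on a dummy batch:
feats = vgg_features(torch.randn(1, 3, 224, 224))
```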

ResNet

A deep yet remarkably concise architecture; the original paper can be viewed here.
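For reference, a minimal PyTorch residual block in the spirit of the ResNet paper; the channel counts and normalization choice are illustrative.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: y = ReLU(x + F(x))."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity skip connection is what keeps very deep
        # networks trainable, as argued in the original paper.
        return self.relu(x + self.body(x))
```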

3. Results

Current image results are shown below. They were obtained from cGAN and CycleGAN with only a small number of training epochs; they should become more accurate with sufficient training.

result1

For the poster, please see this link. The code will be open-sourced soon.