Style-Transfer

An academic project based on CNNs (Convolutional Neural Networks): two images are taken, and the style of one is applied to the content of the other. It uses the pre-trained VGG-19 model for content and style feature extraction.


Style Transfer in PyTorch

This notebook shows how the style of one image can be extracted and applied to the content of another image.

A 19-layer VGG network, consisting of a series of convolutional and pooling layers followed by a few fully-connected layers, is used for content and style feature extraction.
The convolutional layers are named by their stack and their order within the stack:
conv1_1 is the first convolutional layer an image is passed through, in the first stack;
conv2_1 is the first convolutional layer in the second stack;
the deepest convolutional layer in the network is conv5_4.

conv4_2 is responsible for representing the content of the image.
conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 are responsible for style extraction from the image.