
AnimeGAN

A TensorFlow implementation of AnimeGAN for fast photo animation!


This is the open-source implementation of the paper "AnimeGAN: A Novel Lightweight GAN for Photo Animation", which uses the GAN framework to transform real-world photos into anime-style images.

Suggestion: since the real photos in the training set are all landscape photos, if you want to stylize photos in which people are the main subject, add at least 3,000 photos of people to the training set and retrain to obtain a new model.
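If you do add your own photos of people, resize them to match the existing real photos (256 x 256, the same size passed to edge_smooth.py below). A minimal sketch of that preparation step; the folders people_photos and dataset/train_photo are assumed names, so adjust them to your actual dataset layout:

# Minimal sketch: resize collected photos of people to 256 x 256 and write them
# into the real-photo training folder. The paths 'people_photos' and
# 'dataset/train_photo' are assumptions; change them to match your layout.
import os
import cv2
from glob import glob
from tqdm import tqdm

src_dir = 'people_photos'
dst_dir = 'dataset/train_photo'
os.makedirs(dst_dir, exist_ok=True)

for i, path in enumerate(tqdm(sorted(glob(os.path.join(src_dir, '*.jpg'))))):
    img = cv2.imread(path)
    if img is None:
        continue
    img = cv2.resize(img, (256, 256))
    cv2.imwrite(os.path.join(dst_dir, 'people_%05d.jpg' % i), img)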

News: AnimeGAN+ is expected to be released this summer. With some simple tricks added to AnimeGAN, the resulting AnimeGAN+ achieves better animation effects. When I return to school to graduate, more pre-trained models and the video animation test code will also be released in this repository.


Requirements

  • python 3.6.8
  • tensorflow-gpu 1.8
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse
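If needed, the pip-installable dependencies can be set up as shown below; glob and argparse ship with the Python standard library, and opencv is available on pip as opencv-python:

eg. pip install tensorflow-gpu==1.8.0 opencv-python tqdm numpy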

Usage

1. Download vgg19 or the pretrained model

vgg19.npy

Pretrained model

2. Download dataset

Link

3. Do edge_smooth

eg. python edge_smooth.py --dataset Hayao --img_size 256
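For reference, this step follows the CartoonGAN-style idea of blurring the anime frames only around their edges; the edge-smoothed images are then used as additional negative examples during training. Below is a minimal illustrative sketch of that idea, not the exact contents of edge_smooth.py, and the filenames are hypothetical:

# Illustrative sketch of edge smoothing: detect edges, dilate them, and
# replace only those edge regions with Gaussian-blurred pixels.
import cv2
import numpy as np

def smooth_edges(img, kernel_size=5, canny_low=100, canny_high=200):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)
    mask = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8)) > 0
    blurred = cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
    out = img.copy()
    out[mask] = blurred[mask]   # blur only near the detected edges
    return out

img = cv2.imread('anime_frame.jpg')   # hypothetical input file
cv2.imwrite('anime_frame_smooth.jpg', smooth_edges(img))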

4. Train

eg. python main.py --phase train --dataset Hayao --epoch 101 --init_epoch 1

5. Test

eg. python main.py --phase test --dataset Hayao
or python test.py --checkpoint_dir checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_3_10 --test_dir dataset/test/real --style_name H


Results

------> Pictures from the paper 'AnimeGAN: a novel lightweight GAN for photo animation'



------> Photo to Hayao Style


Acknowledgment

This code is based on CartoonGAN-Tensorflow and Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet. Thanks to the contributors of these projects.