stylegan2_latent_editor

Editor for changing StyleGAN2 images by manipulating the latent W vector. Based on the StyleFlow and GANSpace frameworks.


Python 3.7, PyTorch 1.1.0, TensorFlow 1.15.0, torchdiffeq 0.0.1

This repository is heavily based on the StyleFlow and GANSpace repositories; in essence, it just combines the two.

StyleFlow provides well-disentangled attribute edits, while GANSpace offers a way to discover new attributes without additional training. For details on how they work, please refer to the corresponding papers.

Available attributes from StyleFlow, based on Continuous Normalizing Flows (CNF); a sketch of the editing step follows the list:

  • gender
  • glasses
  • head yaw
  • head pitch
  • baldness
  • beard
  • age
  • face expression (smile)

Lighting attributes are also available, but their results don't seem impressive to me.
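
For intuition, here is a minimal sketch of how a StyleFlow-style edit proceeds: the W latent is integrated through a conditional CNF into a base space under the current attribute values, then integrated back under the modified values. Everything below is a placeholder (the dynamics network is untrained, and the attribute layout and the index used for "age" are made up), not the trained model shipped with StyleFlow:

import torch
from torchdiffeq import odeint  # the same ODE solver StyleFlow builds on

# Hypothetical conditional dynamics dw/dt = f(t, w; attrs).
# In StyleFlow this network is trained; here it is an untrained stand-in.
class CondDynamics(torch.nn.Module):
    def __init__(self, w_dim=512, attr_dim=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(w_dim + attr_dim + 1, 512),
            torch.nn.Tanh(),
            torch.nn.Linear(512, w_dim),
        )
        self.attrs = None

    def forward(self, t, w):
        t_col = t.expand(w.shape[0], 1)  # broadcast scalar time over the batch
        return self.net(torch.cat([w, self.attrs, t_col], dim=1))

dynamics = CondDynamics()

def flow(w, attrs, reverse=False):
    # Integrate t: 0 -> 1 (forward) or 1 -> 0 (inverse); a CNF is invertible.
    t = torch.tensor([1.0, 0.0]) if reverse else torch.tensor([0.0, 1.0])
    dynamics.attrs = attrs
    return odeint(dynamics, w, t, rtol=1e-5, atol=1e-5)[-1]

w = torch.randn(1, 512)        # a W latent from the StyleGAN2 mapping network
attrs_src = torch.zeros(1, 8)  # current attribute values (layout is made up)
attrs_dst = attrs_src.clone()
attrs_dst[0, 6] = 1.0          # hypothetical slot for "age"

z = flow(w, attrs_src, reverse=True)  # invert to the base distribution
w_edited = flow(z, attrs_dst)         # regenerate under the new attributes

Re-generating w under new conditions, rather than adding a fixed direction to it, is what lets StyleFlow keep the other attributes stable.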

Available attributes from GANSpace, based on PCA components (see the sketch after this list):

  • baldness again, for comparison with StyleFlow
  • hair color
  • eye size (shifts the look between East Asian and European)
  • eye openness
  • eyebrow thickness
  • lipstick and makeup
  • open mouth
  • skin tone (more like with/without tan)
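
GANSpace edits are simpler: shift w along a principal component computed from many sampled latents, optionally restricting the shift to a range of synthesis layers. A rough sketch, assuming a per-layer W+ latent; the component index and layer range labeled "hair color" here are hypothetical, not the values this repo actually uses:

import numpy as np

def apply_ganspace_edit(w_plus, direction, strength, layer_start, layer_end):
    # w_plus: (num_layers, 512) W+ latent; direction: (512,) unit PCA component.
    # Restricting the edit to a layer range localizes it (early layers steer
    # pose/geometry, later layers steer color/texture).
    w_edit = w_plus.copy()
    w_edit[layer_start:layer_end] += strength * direction
    return w_edit

# Discover directions by PCA over many sampled W vectors (stand-in samples here).
samples = np.random.randn(10000, 512).astype(np.float32)
samples -= samples.mean(axis=0)
_, _, vt = np.linalg.svd(samples, full_matrices=False)
hair_color = vt[3]  # hypothetical component index

w_plus = np.tile(np.random.randn(512).astype(np.float32), (18, 1))
w_edited = apply_ganspace_edit(w_plus, hair_color, strength=2.0,
                               layer_start=8, layer_end=12)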

Installation

Clone this repo.

git clone https://github.com/ziviland/stylegan2_latent_editor.git
cd stylegan2_latent_editor/

This code requires PyTorch (for the CNF), TensorFlow (for StyleGAN2), torchdiffeq, and Python 3+. Please install the dependencies with

pip install -r requirements.txt

This version of StyleGAN2 relies on TensorFlow 1.x.
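
For reference, rendering an edited W+ latent with the TF1 StyleGAN2 networks looks roughly like this; the checkpoint path is a placeholder, and the snippet assumes the usual StyleGAN2 codebase layout (dnnlib/tflib on the import path):

import pickle
import numpy as np
import dnnlib.tflib as tflib  # shipped with the StyleGAN2 codebase

tflib.init_tf()  # sets up the TF 1.x session

# Placeholder path; point this at whatever .pkl checkpoint you downloaded.
with open('pretrained/stylegan2-ffhq-config-f.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

# Synthesize directly from a (possibly edited) W+ latent,
# bypassing the mapping network.
w_plus = np.random.randn(1, 18, 512).astype(np.float32)
images = Gs.components.synthesis.run(
    w_plus,
    randomize_noise=False,
    output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True),
)
# images: (1, 1024, 1024, 3) uint8 array for the FFHQ model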

Installation (Docker)

This is untested, but it may work.

git clone https://github.com/ziviland/stylegan2_latent_editor.git
cd stylegan2_latent_editor/
docker-compose up --build

You must have CUDA (>=10.0, <11.0) and nvidia-docker2 installed first!

License

Licensed according to the StyleFlow (CC BY-NC-SA 4.0) and GANSpace (Apache License 2.0) repositories.

Acknowledgments

This repository is heavily based on the StyleFlow and GANSpace frameworks.

The StyleFlow implementation builds upon the awesome work done by Karras et al. (StyleGAN2), Chen et al. (torchdiffeq), and Yang et al. (PointFlow).