
Python 3.6 | TensorFlow 1.14.0 | CUDA Toolkit 10.0 | cuDNN 7.6.5 | License: AGPLv3

StyleFace

This project is a web application to generate and alter faces (and optionally other objects) using generative adversarial networks. We use NVIDIA's StyleGAN2 architecture to generate and alter faces. In particular, this project focuses on removing the blob-like ("bubble") artifacts that commonly appeared in images generated by earlier models.

NOTE: This project is in the final stages of development. All files will be finalized and uploaded soon.

Requirements

  • Both Linux and Windows are supported. Linux is recommended for performance and compatibility reasons.
  • 64-bit Python 3.6 or newer. Anaconda3 with numpy 1.14 or newer is recommended.
  • TensorFlow 1.14 or 1.15 with GPU support. The code does not support TensorFlow 2.x. On Windows, use TensorFlow 1.14; TensorFlow 1.15 and later may not work. (A quick environment check is sketched after this list.)
  • One or more high-end NVIDIA GPUs, NVIDIA drivers, CUDA 10.0 toolkit and cuDNN 7.5. To reproduce the results reported in the paper, you need an NVIDIA GPU with at least 16 GB of DRAM.
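The requirements above can be verified quickly from Python. The snippet below is an illustrative check and not part of the repository; it only confirms the Python version, the TensorFlow version, and GPU availability.

  # Illustrative environment check (not part of the repository).
  import sys
  import tensorflow as tf

  assert sys.version_info >= (3, 6), "Python 3.6+ is required"
  assert tf.__version__.startswith(("1.14", "1.15")), "TensorFlow 1.14 or 1.15 is required"
  assert tf.test.is_gpu_available(), "A CUDA-capable NVIDIA GPU is required"
  print("Environment looks OK, TensorFlow", tf.__version__)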

Library dependency

You can get a prepared virtual environment from this Google Drive link. You will need to ask the author (mohit.gupta2jly@gmail.com) for the password.
Extract the RAR file into the root directory and type the following in your console:

  styleEnv\Scripts\activate

and press Enter. You can then run the project from this activated environment.
Alternatively, you can run the install_dependencies.py script; it will install all dependencies automatically.
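For reference, a dependency installer along these lines could look like the sketch below. This is an illustration only: the actual install_dependencies.py in the repository may differ, and the package list here is an assumption rather than the project's real requirements.

  # Illustrative sketch only -- the real install_dependencies.py may differ.
  # The package list below is an assumption, not the project's actual requirements.
  import subprocess
  import sys

  PACKAGES = ["tensorflow-gpu==1.14.0", "numpy>=1.14", "pillow", "flask"]

  for package in PACKAGES:
      subprocess.check_call([sys.executable, "-m", "pip", "install", package])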

Pretrained weights

You can download pre-trained weights from this OneDrive link. You will need to ask the author for the password.
This will download a ~10.1 GB file. Extract it into the 'models' directory.
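Once extracted, the weights can be loaded with the standard StyleGAN2 tooling. The sketch below assumes the checkpoint is a StyleGAN2-style pickle and that NVIDIA's dnnlib package is importable; the file name used here is hypothetical.

  # Sketch of loading a StyleGAN2-style checkpoint; the .pkl file name is hypothetical.
  import pickle
  import numpy as np
  import PIL.Image
  import dnnlib.tflib as tflib

  tflib.init_tf()
  with open("models/stylegan2-ffhq-config-f.pkl", "rb") as f:
      _G, _D, Gs = pickle.load(f)          # Gs: averaged generator used for inference

  # Generate one random face to confirm the weights loaded correctly.
  z = np.random.RandomState(0).randn(1, *Gs.input_shape[1:])
  fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
  images = Gs.run(z, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt)
  PIL.Image.fromarray(images[0], "RGB").save("sample.png")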

Sample results

StyleFace can transform a source image into an output image reflecting the style (e.g., hairstyle and makeup) of a given reference image.
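Under the hood this corresponds to StyleGAN2 style mixing: the latent codes of the source and reference are combined layer by layer before synthesis. The sketch below is illustrative only; it uses random latents, assumes Gs has been loaded as in the previous section, and omits the separate step of projecting real photographs into the latent space. The layer index 8 is an arbitrary choice of where coarse source styles end and fine reference styles begin.

  # Illustrative style-mixing sketch with random latents; assumes Gs is loaded as above.
  # Real source/reference photos would first have to be projected into the latent space.
  import numpy as np
  import PIL.Image
  import dnnlib.tflib as tflib

  z_src = np.random.RandomState(1).randn(1, *Gs.input_shape[1:])
  z_ref = np.random.RandomState(2).randn(1, *Gs.input_shape[1:])

  w_src = Gs.components.mapping.run(z_src, None)   # disentangled latents, shape [1, num_layers, 512]
  w_ref = Gs.components.mapping.run(z_ref, None)

  w_mix = w_src.copy()
  w_mix[:, 8:] = w_ref[:, 8:]                      # coarse styles from source, fine styles from reference

  fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
  images = Gs.components.synthesis.run(w_mix, randomize_noise=False, output_transform=fmt)
  PIL.Image.fromarray(images[0], "RGB").save("mixed.png")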