Interactive web app that performs facial attribute modifications on frontal face images.
This app makes use of STGAN, which is based on AttGAN.
Running the application can be done by following the instructions below:
- To create a Python virtual environment (virtualenv) to run the code, type:

  python3 -m venv my-env

- Activate the new environment:

  - Windows:

    my-env\Scripts\activate.bat

  - macOS and Linux:

    source my-env/bin/activate
- Install all the dependencies from requirements.txt:

  pip3 install -r requirements.txt
If you're a conda user, you can create an environment from the environment.yml
file using the Terminal or an Anaconda Prompt for the following steps:
- Create the environment from the environment.yml file:

  conda env create -f environment.yml
- Activate the new environment:

  - Windows:

    activate stgan

  - macOS and Linux:

    source activate stgan
- Verify that the new environment was installed correctly:

  conda list
You can also clone the environment through the environment manager of Anaconda Navigator.
You must download the pretrained model from Google Drive or Baidu Cloud (4qeu) and unzip the files into the model/ directory. The final directory structure should look like this:
model
│ README.md
│ setting.txt
│
└───checkpoints
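If you prefer to script the extraction step, a minimal Python sketch like the one below does it with the standard library; the archive name stgan_pretrained.zip is only a placeholder for whatever file you actually downloaded, and this script is not part of the project.

    # Minimal sketch: extract the downloaded pretrained-model archive into model/.
    # "stgan_pretrained.zip" is a placeholder; use the real name of the downloaded file.
    import zipfile

    with zipfile.ZipFile("stgan_pretrained.zip") as archive:
        archive.extractall("model/")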
Within the virtual environment, run:

  streamlit run app.py
A web application will open at the prompted URL. The Options panel will appear in the left sidebar. First of all, you'll need to specify which of the images located in input_images/ is going to be processed. The model is fed with 128x128 px images, so select images that already have a square (1:1) aspect ratio. Furthermore, the better the illumination and the more centered and visible the face is within the picture, the better the model's output will be. Several images from the CelebA dataset are provided so you can quickly see some results.
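If your own photos are not already square, a short preprocessing sketch like the following can center-crop and resize them before you place them in input_images/. It assumes Pillow is installed; the file names are placeholders and the script is not part of the project.

    # Minimal sketch: prepare an arbitrary photo for the 128x128 px input the model expects.
    # "my_photo.jpg" is a placeholder for your own image file.
    from PIL import Image

    img = Image.open("my_photo.jpg")

    # Center-crop to a square region, then resize to 128x128 px.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((128, 128), Image.LANCZOS)

    # Save it into input_images/ so it shows up in the app's Options panel.
    img.save("input_images/my_photo_128.jpg")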
A Save button is also available to store the output image in the output_images/ folder.
This project is licensed under the MIT License - see the LICENSE.md file for details