
FashionTex

The official implementation of the SIGGRAPH 2023 conference paper "FashionTex: Controllable Virtual Try-on with Text and Texture" (https://arxiv.org/abs/2305.04451).

TODO:

  • Training Code
  • Data Processing Script
  • Test Code
  • ID Recovery Module

Requirements

  1. Create a conda virtual environment and activate it:
conda create -n fashiontex python=3.8
conda activate fashiontex
  2. Install the required packages:
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install ftfy regex tqdm gdown
pip install git+https://github.com/openai/CLIP.git
  3. Install the required packages for DenseCLIP.
  4. Download the pretrained StyleGAN-Human weights (stylegan_human_v2_1024.pkl) from https://github.com/stylegan-human/StyleGAN-Human.
  5. Download the pretrained IR-SE50 model from TreB1eN; it is used by our ID loss during training.
  6. Download the pretrained DenseCLIP weights.

The default path for pretrained weights is ./pretrained. You can change it in mapper/options/train_options.py.
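As a quick sanity check before training, a small script like the one below can report which checkpoints are missing from the pretrained directory. Only the StyleGAN-Human filename is stated in this README; the IR-SE50 and DenseCLIP filenames here are placeholders that you should match to your actual downloads and to mapper/options/train_options.py.

```python
from pathlib import Path

# Expected checkpoints under ./pretrained. The StyleGAN-Human name comes
# from this README; the other two filenames are assumptions.
EXPECTED_WEIGHTS = [
    "stylegan_human_v2_1024.pkl",   # StyleGAN-Human generator
    "model_ir_se50.pth",            # IR-SE50 for the ID loss (assumed name)
    "denseclip.pth",                # DenseCLIP weights (assumed name)
]

def missing_weights(pretrained_dir="./pretrained", names=EXPECTED_WEIGHTS):
    """Return the subset of expected checkpoint files that are absent."""
    root = Path(pretrained_dir)
    return [n for n in names if not (root / n).is_file()]

if __name__ == "__main__":
    missing = missing_weights()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All pretrained weights found.")
```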

Prepare data

In this project, we use the DeepFashion-MultiModal dataset and use e4e to invert images into the latent space.

  1. Download the DeepFashion-MultiModal dataset.
  2. To use the pre-trained StyleGAN-Human model, align the images following "Aligned raw images". Put the aligned images in data/data_split/aligned.
  3. Invert the aligned images: the simplest way is to follow "Invert real image with PTI"; we only need the output embedding "0.pt" in 'outputs/pti/'. (Since we only need the e4e output, you can comment out the finetuning code to save time.)
  4. Run the data processing script:
bash data/process.sh
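After the steps above, each aligned image should have a corresponding inverted latent. A helper like the following can pair them up; the per-image subfolder layout under the latent root (outputs/pti/&lt;image_stem&gt;/0.pt) is an assumption for illustration, so adjust it to however your PTI/e4e run actually saves "0.pt".

```python
from pathlib import Path

def pair_images_with_latents(aligned_dir="data/data_split/aligned",
                             latent_root="outputs/pti"):
    """Map each aligned image to its inverted-latent file.

    The per-image subfolder layout under `latent_root` is an assumed
    convention for this sketch; change it to match your inversion output.
    """
    pairs = {}
    for img in sorted(Path(aligned_dir).glob("*.png")):
        latent = Path(latent_root) / img.stem / "0.pt"
        if latent.is_file():
            pairs[str(img)] = str(latent)
    return pairs
```

Images without a matching latent are simply skipped, which makes it easy to spot incomplete inversion runs by comparing the pair count against the image count.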

Training

You can set the GPU number in run.sh. To change the data, weight, or output paths, or other settings, edit mapper/options/train_options.py.

bash run.sh
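Conceptually, the CLIP-based text supervision used during training comes down to a cosine-similarity loss between the generated image's CLIP embedding and the target text's embedding. The pure-Python sketch below illustrates that computation only; it is not the repo's actual loss implementation, and the function names are hypothetical.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def clip_style_loss(image_emb, text_emb):
    """1 - cos(image, text): minimized when the image embedding
    aligns with the target text embedding."""
    return 1.0 - cosine_similarity(image_emb, text_emb)
```

In practice the embeddings would come from CLIP's image and text encoders and the loss would be computed in torch so gradients flow back to the mapper.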

Acknowledgements

This code is based on StyleCLIP and HairCLIP.