Official code for NeurIPS 2021 paper "Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN"
Create a virtual environment:
virtualenv pasta --python=3.7
source pasta/bin/activate
Install required packages:
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
pip install click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3
pip install psutil scipy matplotlib opencv-python scikit-image==0.18.3 pycocotools
apt install libgl1-mesa-glx
Since the copyright of the UPT dataset belongs to the e-commerce websites Zalando and Zalora, we only release the image links in this link. For more details about the dataset and the crawling scripts, please send an email to xiezhy6@mail2.sysu.edu.cn.
After downloading the raw RGB images, we run the pose estimator OpenPose and the human parser Graphonomy on each image to obtain the 18-point human keypoints and the 19-label human parsing, respectively.
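As an illustration of what the keypoints annotation looks like, the sketch below parses an OpenPose-style JSON record into (x, y, confidence) triples. OpenPose stores each person's pose as a flat list of x/y/confidence values, so the 18-point model yields 54 numbers per person; the sample record here is fabricated for demonstration only.

```python
import json

def load_keypoints(json_text):
    """Return a list of (x, y, confidence) triples for the first detected person."""
    data = json.loads(json_text)
    # OpenPose lays keypoints out as [x1, y1, c1, x2, y2, c2, ...]
    flat = data["people"][0]["pose_keypoints_2d"]
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# A minimal fabricated record with only two keypoints, for illustration.
sample = json.dumps({
    "people": [{"pose_keypoints_2d": [120.0, 35.5, 0.92, 118.0, 60.0, 0.88]}]
})
points = load_keypoints(sample)
print(points)  # [(120.0, 35.5, 0.92), (118.0, 60.0, 0.88)]
```

The same pattern applies to a real `image1_keypoints.json` file: read it from disk with `json.load`, then regroup the flat list into 18 triples.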
The dataset structure is recommended as follows:
+--UPT_256_192
|  +--UPT_subset1_256_192
|  |  +--image
|  |  |  +--e.g. image1.jpg
|  |  |  +--...
|  |  +--keypoints
|  |  |  +--e.g. image1_keypoints.json
|  |  |  +--...
|  |  +--parsing
|  |  |  +--e.g. image1.png
|  |  |  +--...
|  |  +--train_pairs_front_list_0508.txt
|  |  +--test_pairs_front_list_shuffle_0508.txt
|  +--UPT_subset2_256_192
|  |  +--image
|  |  |  +--...
|  |  +--keypoints
|  |  |  +--...
|  |  +--parsing
|  |  |  +--...
|  |  +--train_pairs_front_list_0508.txt
|  |  +--test_pairs_front_list_shuffle_0508.txt
|  +--...
Given the raw RGB images, human keypoints, and human parsing results, we can run the training and testing scripts.
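Before training or testing, it can help to verify that every image has its matching annotations. The helper below is a hypothetical sketch (not part of the repository) that assumes the directory layout recommended above: for every `<name>.jpg` under `image/` there should be a `<name>_keypoints.json` under `keypoints/` and a `<name>.png` under `parsing/`.

```python
import os
import tempfile

def missing_annotations(subset_dir):
    """Return names of images lacking a keypoints JSON or a parsing PNG."""
    image_dir = os.path.join(subset_dir, "image")
    missing = []
    for fname in os.listdir(image_dir):
        stem, _ = os.path.splitext(fname)
        kp = os.path.join(subset_dir, "keypoints", stem + "_keypoints.json")
        ps = os.path.join(subset_dir, "parsing", stem + ".png")
        if not (os.path.isfile(kp) and os.path.isfile(ps)):
            missing.append(fname)
    return missing

# Demonstrate on a throwaway directory: "a" is complete, "b" has no annotations.
root = tempfile.mkdtemp()
for sub in ("image", "keypoints", "parsing"):
    os.makedirs(os.path.join(root, sub))
open(os.path.join(root, "image", "a.jpg"), "w").close()
open(os.path.join(root, "keypoints", "a_keypoints.json"), "w").close()
open(os.path.join(root, "parsing", "a.png"), "w").close()
open(os.path.join(root, "image", "b.jpg"), "w").close()
print(missing_annotations(root))  # ['b.jpg']
```

Running this against each `UPT_subsetN_256_192` directory before training should surface incomplete records early.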
We provide pre-trained PASTA-GAN models trained on the full UPT dataset (i.e., our newly collected data plus data from the DeepFashion and MPV datasets) at resolutions of 256 and 512, respectively.
We provide a simple script to test the pre-trained models provided above on the UPT dataset as follows:
CUDA_VISIBLE_DEVICES=0 python3 -W ignore test.py \
--network /datazy/Codes/PASTA-GAN/PASTA-GAN_fullbody_model/network-snapshot-004000.pkl \
--outdir /datazy/Datasets/pasta-gan_results/unpaired_results_fulltryonds \
--dataroot /datazy/Datasets/PASTA_UPT_256 \
--batchsize 16
Alternatively, you can run the bash script with the following command:
bash test.sh 1
To test with the higher-resolution pre-trained model (512x320), run the bash script with the following command:
bash test.sh 2
Note that, in the testing script:
- --network refers to the path of the pre-trained model;
- --outdir refers to the directory for the generated results;
- --dataroot refers to the path of the data root.
Before running the testing script, please make sure these parameters point to the correct locations.
- Download the UPT_256_192 training set.
- Download the VGG model from VGG_model, then put "vgg19_conv.pth" and "vgg19-dcbb9e9d" under the directory "checkpoints".
- Run bash train.sh 1.
- Release the pretrained model (256x192) and the inference script.
- Release the training script.
- Release the pretrained model (512x320).
- Release the training script for model (512x320).
The use of this code is RESTRICTED to non-commercial research and educational purposes.