👕 ICCV'21 Paper | 👖 Project Page | 👚 arXiv | 🎽 Video Talk | 👗 Running This Code
The official implementation of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik (ICCV 2021).
🔔 Updates
- [2021/08] Please check the latest version of our paper for updated and clarified implementation details.
- Clarification: the facial component was not added to the skin encoding, contrary to what was stated in our CVPR 2021 workshop paper due to a minor typo. However, this does not affect our conclusions or the comparison with prior work, because the skin encoding is an independent design choice.
- [2021/07] To appear in ICCV 2021.
- [2021/06] The best paper at Computer Vision for Fashion, Art and Design Workshop CVPR 2021.
Supported Try-on Applications
Supported Editing Applications
More results
Play with demo.ipynb!
Please follow the installation instructions in GFLA to set up the environment. Then run

```
pip install -r requirements.txt
```

If you only want to run inference, you can use a later version of PyTorch and do not need to install GFLA's CUDA functions; just specify `--frozen_flownet`.
We run experiments on the DeepFashion dataset. To set up the dataset:
- Download and unzip `img_highres.zip` from the DeepFashion In-shop dataset into `$DATA_ROOT`.
- Download the train/val split and the pre-processed keypoint annotations from the GFLA source or the PATN source, and put the `.csv` and `.lst` files at `$DATA_ROOT`.
- Run `python tools/generate_fashion_dataset.py --dataroot $DATAROOT` to split the data.
- Get human parsing. You can obtain the parsing by either:
- Download standard_test_anns.txt for fast visualization.
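If you want to sanity-check the downloaded keypoint files, PATN-style annotation rows are, to our understanding, colon-separated: image name, a list of y coordinates, and a list of x coordinates, with -1 marking missing joints. A minimal parser sketch (the example row below is hypothetical, not taken from the real dataset; verify the format against your downloaded `.csv`):

```python
import json

def parse_annotation_line(line):
    """Parse one PATN-style keypoint annotation row.

    Assumed format (check your downloaded .csv): fields separated by ':',
    giving the image name, a list of y coordinates, and a list of x
    coordinates, with -1 marking joints that were not detected.
    """
    name, ys, xs = line.strip().split(":")
    keypoints_y = json.loads(ys)
    keypoints_x = json.loads(xs)
    # Pair coordinates per joint; (-1, -1) means the joint is missing.
    return name, list(zip(keypoints_y, keypoints_x))

# Hypothetical example row (not from the real dataset):
row = "fashionWOMENDressesid0000001.jpg:[10, 20, -1]:[30, 40, -1]"
name, joints = parse_annotation_line(row)
```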
After the processing, you should have the dataset folder formatted like:
```
+ $DATA_ROOT
|   + train (all training images)
|   |   - xxx.jpg
|   |     ...
|   + trainM_lip (human parse of all training images)
|   |   - xxx.png
|   |     ...
|   + test (all test images)
|   |   - xxx.jpg
|   |     ...
|   + testM_lip (human parse of all test images)
|   |   - xxx.png
|   |     ...
|   - fashion-pairs-train.csv (paired poses for training)
|   - fashion-pairs-test.csv (paired poses for test)
|   - fashion-annotation-train.csv (keypoints for training images)
|   - fashion-annotation-test.csv (keypoints for test images)
|   - train.lst
|   - test.lst
|   - standard_test_anns.txt
```
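To confirm your `$DATA_ROOT` matches the layout above before training, a small checker sketch (the expected entries are taken directly from the tree shown here):

```python
import os

# Expected entries under $DATA_ROOT after preprocessing, taken from the
# layout above (names ending with '/' are directories).
EXPECTED = [
    "train/", "trainM_lip/", "test/", "testM_lip/",
    "fashion-pairs-train.csv", "fashion-pairs-test.csv",
    "fashion-annotation-train.csv", "fashion-annotation-test.csv",
    "train.lst", "test.lst", "standard_test_anns.txt",
]

def missing_entries(data_root):
    """Return the expected files/directories absent under data_root."""
    missing = []
    for entry in EXPECTED:
        path = os.path.join(data_root, entry.rstrip("/"))
        ok = os.path.isdir(path) if entry.endswith("/") else os.path.isfile(path)
        if not ok:
            missing.append(entry)
    return missing
```

Running `missing_entries("$DATA_ROOT")` on a complete setup should return an empty list.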
Please download the pretrained weights from here and unzip them at `checkpoints/`.
After downloading the pretrained model and setting the data, you can try out our applications in notebook demo.ipynb.
(The checkpoints above are reproduced, so quantitative evaluation may differ slightly from the reported results. To get the original results, please check our released generated images here.)

(`DIORv1_64` was trained with a minor difference in code, but it may give better visual results in some applications. To try it, specify `--netG diorv1`.)
Warmup the Global Flow Field Estimator
Note: if you do not want to warm up the Global Flow Field Estimator, you can instead use the pretrained GFLA weights from here and extract the estimator's weights from them. Otherwise, run

```
sh scripts/run_pose.sh
```
Training
After warming up the flownet, train the pipeline with

```
sh scripts/run_train.sh
```

Run `tensorboard --logdir checkpoints/$EXP_NAME/train` to monitor training. Resetting the discriminators may help when training gets stuck in local minima.
To download our generated images (256x176, as reported in the paper): here.
SSIM, FID and LPIPS
To run evaluation (SSIM, FID and LPIPS) on pose transfer task:
```
sh scripts/run_eval.sh
```
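For intuition about what the SSIM metric measures, here is a simplified global SSIM in plain NumPy. Note that reported SSIM numbers are computed with a sliding (typically Gaussian) window; this single-window variant only illustrates the formula, using the standard constants K1=0.01 and K2=0.03:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM between two grayscale images.

    Illustrative only: reported SSIM uses a sliding window average,
    not one global statistic per image.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; dissimilar images score lower.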
If you find this work helpful, please consider starring 🌟 this repo and citing us as
```
@InProceedings{Cui_2021_ICCV,
    author    = {Cui, Aiyu and McKee, Daniel and Lazebnik, Svetlana},
    title     = {Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-On and Outfit Editing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14638-14647}
}
```
This repository is built on GFLA, pytorch-CycleGAN-and-pix2pix, PATN and MUNIT. Please be aware of their licenses when using the code. Many thanks to these pioneering researchers for their great work!