This repository is the official implementation of TPD:
Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On
Xu Yang, Changxing Ding, Zhibin Hong, Junhao Huang, Jin Tao, Xiangmin Xu
- Release inference code
- Release model weights
- Release training code
- Release evaluation code
conda env create -f environment.yml
conda activate TPD
Download the pretrained checkpoint and save it in the checkpoints folder as follows:
checkpoints
|-- release
|-- TPD_240epochs.ckpt
Download the VITON-HD dataset from here.
Copy the test folder to create the validation split; the dataset structure should look like:
datasets/VITONHD/
test | train | validation (copied from test)
|-- agnostic-mask
|-- agnostic-v3.2
|-- cloth
|-- cloth_mask
|-- image
|-- image-densepose
|-- image-parse-agnostic-v3.2
|-- image-parse-v3
|-- openpose_img
|-- openpose_json
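The validation split is simply a copy of the test split. A minimal sketch of that copy step (the `make_validation_split` helper is illustrative, not part of the repo):

```python
import shutil
from pathlib import Path

def make_validation_split(root):
    # Copy the test folder to validation/, matching the dataset layout above.
    # `root` is the dataset directory, e.g. datasets/VITONHD.
    src = Path(root) / "test"
    dst = Path(root) / "validation"
    if src.is_dir() and not dst.exists():
        shutil.copytree(src, dst)
    return dst

if __name__ == "__main__":
    make_validation_split("datasets/VITONHD")
```

Equivalently, a plain `cp -r datasets/VITONHD/test datasets/VITONHD/validation` achieves the same thing.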
Refer to commands/inference.sh
We use the pretrained Paint-by-Example model as initialization and expand its first conv layer from 9 to 18 input channels (the new channels are zero-initialized). Please download the pretrained model first and save it in the checkpoints folder. Then run utils/rm_clip_and_add_channels.py to add the input channels of the first conv layer and remove the CLIP module. The final checkpoints folder structure looks like:
checkpoints
|-- original
|-- model.ckpt
|-- model_prepared.ckpt
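The channel expansion performed by utils/rm_clip_and_add_channels.py can be sketched as follows: the first conv layer's kernel grows from 9 to 18 input channels, with the new channels zero-initialized so the expanded model initially ignores the added inputs and behaves like the pretrained one. A minimal numpy sketch (the function name and shapes are assumptions; the actual script edits the checkpoint's state dict):

```python
import numpy as np

def expand_first_conv(weight, new_in_channels=18):
    # weight: (out_channels, in_channels, kH, kW) kernel of the first conv layer.
    out_c, in_c, kh, kw = weight.shape
    assert new_in_channels >= in_c
    expanded = np.zeros((out_c, new_in_channels, kh, kw), dtype=weight.dtype)
    # Keep the pretrained weights for the original input channels;
    # the extra channels start at zero, so the added inputs contribute
    # nothing at initialization.
    expanded[:, :in_c] = weight
    return expanded
```

Zero-initializing the new channels is what makes fine-tuning stable: the expanded network starts as an exact copy of Paint-by-Example on the original 9 channels.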
Refer to commands/train.sh
LPIPS: https://github.com/richzhang/PerceptualSimilarity
FID: https://github.com/mseitzer/pytorch-fid
Run utils/generate_GT.py to generate GT images at 384×512 resolution.
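The resizing step in utils/generate_GT.py amounts to something like the sketch below (the helper name is illustrative; Pillow is assumed):

```python
from PIL import Image

def resize_to_gt(image, size=(384, 512)):
    # Resize to 384x512 (width x height), the resolution used for evaluation.
    return image.resize(size)
```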
Refer to calculate_metrics/calculate_metrics.sh
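FID, as computed by pytorch-fid, is the Fréchet distance between Gaussians fitted to Inception features of the generated and ground-truth image sets. A minimal numpy sketch of that distance (helper names are illustrative; the real tool also extracts the Inception features and handles numerical edge cases):

```python
import numpy as np

def _sqrtm_psd(m):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID between N(mu1, sigma1) and N(mu2, sigma2).
    # Tr(sqrtm(S1 S2)) is computed as Tr(sqrtm(S1^{1/2} S2 S1^{1/2}))
    # so the intermediate matrix stays symmetric PSD.
    s1h = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1h @ sigma2 @ s1h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In practice, just use the pytorch-fid CLI (`python -m pytorch_fid path1 path2`) on the GT and result folders.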
Our code borrows heavily from Paint-by-Example; thanks to its authors.
@misc{yang2024texturepreserving,
title={Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On},
author={Xu Yang and Changxing Ding and Zhibin Hong and Junhao Huang and Jin Tao and Xiangmin Xu},
year={2024},
eprint={2404.01089},
archivePrefix={arXiv},
primaryClass={cs.CV}
}