- Create the conda environment from `environment.yaml` (a combined shell sketch of these steps follows this list)
- Download the DIR-D dataset, pseudo mesh and checkpoints from HuggingFace
- Extract the dataset. Your root directory should now contain `/CDM`, `/MDM`, `/DIR-D` and `/Checkpoints`
- Run `cd MDM && python sample.py` to generate the MDM intermediate results
- Run `cd CDM && python sample.py` to generate the final results
- Run `python metric.py` to calculate the metrics
- A lower version of `pytorch-lightning` is needed. Install the environment from `environment-training.yaml`. (Tested with `micromamba`; if installation with conda fails, consider installing all packages in this file manually.) The full training sequence is sketched after this list.
- Train MDM first: `cd MDM && accelerate launch train_512_atten.py`. You may want to modify this file to change the batch size, etc. Please refer to `accelerate`'s documentation for more information.
- When training is completed, modify `MDM/sample.py`. Specifically, replace `testing` with `training` and change the path to your checkpoint.
- Train CDM: `cd CDM && python main.py fit -b configs/rectangling.yaml`
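A condensed sketch of the training workflow; the environment name is hypothetical (check the yaml file) and the `micromamba` invocation assumes a standard install:

```bash
# Create and activate the training environment with the older pytorch-lightning pin.
micromamba create -f environment-training.yaml
micromamba activate recdiffusion-train  # hypothetical name; check the yaml file

# Stage 1: train MDM (adjust batch size etc. in the script or via the accelerate config).
(cd MDM && accelerate launch train_512_atten.py)

# After MDM training, edit MDM/sample.py: replace "testing" with "training"
# and change the checkpoint path so that sampling uses your own weights.

# Stage 2: train CDM.
(cd CDM && python main.py fit -b configs/rectangling.yaml)
```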
@inproceedings{zhou2024recdiffusion,
  title={RecDiffusion: Rectangling for Image Stitching with Diffusion Models},
  author={Zhou, Tianhao and Li, Haipeng and Wang, Ziyi and Luo, Ao and Zhang, Chen-Lin and Li, Jiajun and Zeng, Bing and Liu, Shuaicheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2692--2701},
  year={2024}
}