
PanoDiffusion: 360-degree Panorama Outpainting via Diffusion

Tianhao Wu · Chuanxia Zheng · Tat-Jen Cham

ICLR 2024

Setup

Installation

This code has been tested with Python 3.8.5, PyTorch 1.7.0, and CUDA 11.0 on a V100 GPU. First download the code and our pretrained models, which include checkpoints for the RGB and depth VQ models, the LDM, and the RefineNet.

git clone https://github.com/PanoDiffusion/PanoDiffusion.git
cd PanoDiffusion
conda env create -f environment.yml
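After creating the environment, activate it and place the downloaded checkpoints so they match the paths used by the inference command below. The environment name comes from the name: field in environment.yml; 'panodiffusion' is an assumption here, and the directory layout is inferred from the inference flags rather than stated by the repository:

conda activate panodiffusion   # assumed name; use the one in environment.yml if it differs

# Expected checkpoint layout, inferred from the --ckpt, --config and
# --refinenet_ckpt flags used below:
# PanoDiffusion/
# ├── config/outpainting.yaml
# └── pretrain_model/
#     ├── ldm/ldm.ckpt
#     └── refinenet/refinenet.pth.tar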

Play with PanoDiffusion

We have already prepared some images and masks under the 'example' folder. To test the model, simply run:

python inference.py \
--indir PanoDiffusion/example \
--outdir PanoDiffusion/example/output \
--ckpt PanoDiffusion/pretrain_model/ldm/ldm.ckpt \
--config PanoDiffusion/config/outpainting.yaml \
--refinenet_ckpt PanoDiffusion/pretrain_model/refinenet/refinenet.pth.tar

or, equivalently:

bash inference.sh

The results will be saved in the 'output' folder. Because diffusion sampling starts from random noise, each run produces a different outpainting result.
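If you want to try your own panoramas, you can prepare an image/mask pair in the same spirit as the provided examples. The sketch below is a minimal illustration, not the repository's own tooling: the filenames, the 512x1024 equirectangular resolution, and the mask polarity (255 = region to outpaint) are all assumptions, so match whatever convention the files under 'example' actually use:

from PIL import Image
import numpy as np

# Load an equirectangular panorama and resize it to an assumed
# 1024x512 (width x height) model resolution.
img = Image.open("my_pano.jpg").convert("RGB").resize((1024, 512))

# Build a binary mask: here 255 marks pixels for the model to outpaint
# and 0 marks observed pixels. Check the provided example masks, as the
# convention may be inverted.
mask = np.full((512, 1024), 255, dtype=np.uint8)
mask[:, 384:640] = 0  # keep a central 256-pixel-wide strip as observed

# Assumed side-by-side naming inside the input folder.
img.save("example/my_pano.png")
Image.fromarray(mask).save("example/my_pano_mask.png")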

Citation

If you find our code or paper useful, please cite our work:

@inproceedings{wu2023panodiffusion,
  title={PanoDiffusion: 360-degree Panorama Outpainting via Diffusion},
  author={Wu, Tianhao and Zheng, Chuanxia and Cham, Tat-Jen},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}