Project Page | Paper | Data
- Create a conda environment:
```shell
conda env create -n desigen python=3.8
conda activate desigen
pip install -r requirements.txt
```
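A quick, hypothetical sanity check (not part of the repo): after activating the environment, confirm that the key packages pulled in by `requirements.txt` are importable. The package names here are assumptions based on the training steps below.

```python
# Hypothetical sanity check: verify that key dependencies are importable
# from the activated `desigen` environment.
import importlib.util

def check_deps(packages):
    """Map each package name to True if it can be found, else False."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

print(check_deps(["torch", "diffusers", "accelerate"]))
```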
- Download the BASNet checkpoint into the `saliency` folder.
- Download meta data from here.
- Download and process background images.
- Preprocess saliency maps for background images.
- Move the corresponding background images to the right directory.
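The saliency preprocessing step above is not spelled out; a minimal sketch of one plausible reading, assuming BASNet emits a grayscale saliency map per background image that is then binarized (the function name and 0.5 threshold are our assumptions, not the official code):

```python
import numpy as np

def binarize_saliency(saliency_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Normalize a grayscale saliency map to [0, 1] and binarize it.

    Hypothetical helper: assumes BASNet outputs either float maps in [0, 1]
    or uint8 maps in [0, 255].
    """
    m = saliency_map.astype(np.float32)
    if m.max() > 1.0:          # uint8-style map stored in [0, 255]
        m = m / 255.0
    return (m >= threshold).astype(np.uint8)
```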
```shell
cd background/
# config accelerate first
accelerate config
# train the background generator
sh train.sh
```
Refer to Diffusers for more training settings.
Generate background images from the prompts in the validation set and evaluate them with the proposed metrics: FID, Salient Ratio, and CLIP Score.
The pretrained saliency detection model can be downloaded from BASNet and placed in the `saliency` directory.
```shell
sh test.sh
```
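Of the three metrics, Salient Ratio is the project-specific one. A sketch of it as we read it, assuming it measures the fraction of pixels that BASNet marks as salient in a generated background (lower meaning a cleaner background); the function name and threshold are our assumptions, not the official implementation:

```python
import numpy as np

def salient_ratio(saliency_map: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of salient pixels in a [0, 1] grayscale saliency map."""
    return float((saliency_map >= threshold).mean())
```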
```shell
cd layout/
sh train.sh
```
Compute the layouts given ground-truth images and save them for further evaluation.
```shell
sh test.sh
```
Designs can be simply generated by the following command:
```shell
python pipeline.py \
    --prompt "Rose Valentines' Day" \
    --mode "background" \
    --encoder_path /path/to/encoder \
    --decoder_path /path/to/decoder \
    --generator_path logs/background-ours
```
The `mode` parameter can be switched to `background` (background-only generation), `design` (design generation), or `iteration` (iterative refinement). A user-provided attention reduction mask can also be supplied via `mask_image_path`.
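The format expected by `mask_image_path` is not documented here; a hypothetical helper, assuming a single-channel image where white marks the region to keep free of salient content (the convention and function name are guesses):

```python
import numpy as np
from PIL import Image

def make_mask(width: int, height: int, box: tuple) -> Image.Image:
    """Build a binary attention-reduction mask.

    box = (left, top, right, bottom): region to mask out (set to white).
    Hypothetical convention; check the repo for the actual expected format.
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    left, top, right, bottom = box
    mask[top:bottom, left:right] = 255
    return Image.fromarray(mask, mode="L")
```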
Part of our code is borrowed from the following repositories:
```bibtex
@misc{weng2024desigen,
      title={Desigen: A Pipeline for Controllable Design Template Generation},
      author={Haohan Weng and Danqing Huang and Yu Qiao and Zheng Hu and Chin-Yew Lin and Tong Zhang and C. L. Philip Chen},
      year={2024},
      eprint={2403.09093},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```