SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations


Project | Paper | Colab

PyTorch implementation of SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations (ICLR 2022).

Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon

Stanford and CMU

Recently, SDEdit has also been applied to text-guided image editing with large-scale text-to-image models. Notable examples include Stable Diffusion's img2img function (see here), GLIDE, and distilled-SD. The example below comes from distilled-SD.
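
For readers coming from those tools, the snippet below is a minimal sketch of SDEdit-style editing through Stable Diffusion's img2img pipeline in the external diffusers library (not part of this repository); the model id, file names, prompt, and strength value are illustrative placeholders.

# SDEdit-style editing via Stable Diffusion img2img (external diffusers library).
# The model id, file names, and prompt are illustrative placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("stroke_painting.png").convert("RGB").resize((512, 512))

# `strength` plays the role of SDEdit's noise level: larger values add more noise
# to the input and give the model more freedom to deviate from it.
result = pipe(
    prompt="a photo of a bedroom",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("edited.png")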

Overview

The key intuition of SDEdit is to "hijack" the reverse stochastic process of SDE-based generative models, as illustrated in the figure below. Given an input image for editing, such as a stroke painting or an image with color strokes, we can add a suitable amount of noise to make its artifacts undetectable, while still preserving the overall structure of the image. We then initialize the reverse SDE with this noisy input, and simulate the reverse process to obtain a denoised image of high quality. The final output is realistic while resembling the overall image structure of the input.
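
The sketch below illustrates this procedure for a DDPM-style discretization of the VP SDE. The noise-prediction network, the noise schedule, and the function names are illustrative stand-ins, not this repository's actual sampler.

# Minimal sketch of SDEdit with a DDPM-style (VP) model. `model` predicts the
# added noise; `alphas_cumprod` is the cumulative noise schedule, assumed to
# satisfy alphas_cumprod[0] = 1. Both are illustrative placeholders.
import torch

@torch.no_grad()
def sdedit(model, x0, alphas_cumprod, t0=500):
    # 1) Perturb the guide image x0 to noise level t0 ("hijack" the reverse SDE).
    a_bar_t0 = alphas_cumprod[t0]
    x = a_bar_t0.sqrt() * x0 + (1 - a_bar_t0).sqrt() * torch.randn_like(x0)

    # 2) Simulate the reverse process from t0 back to 0 (ancestral sampling).
    for t in reversed(range(1, t0 + 1)):
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        alpha_t = a_bar_t / a_bar_prev
        beta_t = 1.0 - alpha_t
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(x, t_batch)                               # predicted noise
        mean = (x - beta_t / (1 - a_bar_t).sqrt() * eps) / alpha_t.sqrt()
        if t > 1:
            sigma = (beta_t * (1 - a_bar_prev) / (1 - a_bar_t)).sqrt()
            x = mean + sigma * torch.randn_like(x)
        else:
            x = mean
    return x                                                  # denoised output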

Getting Started

The code will automatically download pretrained SDE (VP) PyTorch models on CelebA-HQ, LSUN bedroom, and LSUN church outdoor.

Data format

We save the image and the corresponding mask as an array [image, mask], where "image" is a PyTorch tensor with values in [0, 1] and "mask" is the corresponding binary PyTorch tensor specifying the editing region. We provide a few examples, and functions/process_data.py will automatically download the examples to the colab_demo folder.
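
As an illustration, one such pair could be prepared as follows; the file names and the serialization call are assumptions, so check functions/process_data.py for the loader this repository actually uses.

# Prepare an [image, mask] pair in the format described above (illustrative).
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()                        # scales pixels to [0, 1]

image = to_tensor(Image.open("my_input.png").convert("RGB"))   # (3, H, W) in [0, 1]
mask = torch.zeros(1, image.shape[1], image.shape[2])          # binary editing mask
mask[:, 100:200, 100:200] = 1.0                                # 1 marks the region to edit

torch.save([image, mask], "colab_demo/my_example.pth")         # save the [image, mask] pair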

Re-training the model

Here is the PyTorch implementation for training the model.
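
For orientation, the snippet below is a generic sketch of the noise-prediction objective that VP-SDE / DDPM models optimize; the model, data, and beta schedule are placeholders, not this repository's training code.

# Generic noise-prediction (DDPM / VP-SDE) training objective (illustrative).
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                    # placeholder beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0):
    # Sample a random timestep and perturb the clean batch x0 accordingly.
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    a_bar = alphas_cumprod.to(x0.device)[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    eps_pred = model(x_t, t)                             # model predicts the added noise
    return F.mse_loss(eps_pred, noise)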

Stroke-based image generation

Given an input stroke painting, our goal is to generate a realistic image that shares the same structure as the input painting. SDEdit can synthesize multiple diverse outputs for each input on the LSUN bedroom, LSUN church, and CelebA-HQ datasets.

To generate results on LSUN datasets, please run

python main.py --exp ./runs/ --config bedroom.yml --sample -i images --npy_name lsun_bedroom1 --sample_step 3 --t 500  --ni
python main.py --exp ./runs/ --config church.yml --sample -i images --npy_name lsun_church --sample_step 3 --t 500  --ni

Stroke-based image editing

Given a natural input image with user strokes, we want to manipulate the image based on the user's edit. SDEdit can generate edits that are both realistic and faithful to the user's edit, while avoiding undesired changes.

To perform stroke-based image editing, run

python main.py --exp ./runs/  --config church.yml --sample -i images --npy_name lsun_edit --sample_step 3 --t 500  --ni
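
One common way to keep the unedited region intact is to project the known pixels back at every reverse step, as in the sketch below (same placeholder model and schedule as the Overview sketch; the repository's sampler may differ in details). Here mask is 1 inside the editing region.

# Mask-constrained SDEdit: only the masked region is re-synthesized (illustrative).
import torch

@torch.no_grad()
def sdedit_with_mask(model, x0, mask, alphas_cumprod, t0=500):
    a_bar_t0 = alphas_cumprod[t0]
    x = a_bar_t0.sqrt() * x0 + (1 - a_bar_t0).sqrt() * torch.randn_like(x0)

    for t in reversed(range(1, t0 + 1)):
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        alpha_t = a_bar_t / a_bar_prev
        beta_t = 1.0 - alpha_t
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(x, t_batch)
        mean = (x - beta_t / (1 - a_bar_t).sqrt() * eps) / alpha_t.sqrt()
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        sigma = (beta_t * (1 - a_bar_prev) / (1 - a_bar_t)).sqrt() if t > 1 else 0.0
        x = mean + sigma * noise

        # Reset the unedited region to a correspondingly noised copy of the input,
        # so pixels outside the editing mask stay faithful to the original image.
        x_known = a_bar_prev.sqrt() * x0 + (1 - a_bar_prev).sqrt() * torch.randn_like(x0)
        x = mask * x + (1 - mask) * x_known
    return x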

Additional results

References

If you find this repository useful for your research, please cite the following work.

@inproceedings{meng2022sdedit,
  title={{SDE}dit: Guided Image Synthesis and Editing with Stochastic Differential Equations},
  author={Chenlin Meng and Yutong He and Yang Song and Jiaming Song and Jiajun Wu and Jun-Yan Zhu and Stefano Ermon},
  booktitle={International Conference on Learning Representations},
  year={2022},
}

This implementation is based on / inspired by:

Here are also some of the interesting follow-up works of SDEdit: