/pixel-guided-diffusion

Fine-grained Image Editing by Pixel-wise Guidance Using Diffusion Models


Official implementation of the paper Fine-grained Image Editing by Pixel-wise Guidance Using Diffusion Models, accepted at the AI for Content Creation workshop at CVPR 2023.

This code is implemented in both nnabla and PyTorch.
Please see the respective repository for setup and demonstration.

NOTE: If you want to clone the submodules as well, clone with `git clone --recursive`.

Overview

Our approach, pixel-wise gradient-based guidance for diffusion models, enables fine-grained image editing. An existing GAN-based method fails to reconstruct detailed features. In contrast, our diffusion-based method enables pixel-wise editing while preserving detailed features.

References

@article{matsunaga2022fine,
  title={Fine-grained Image Editing by Pixel-wise Guidance Using Diffusion Models},
  author={Matsunaga, Naoki and Ishii, Masato and Hayakawa, Akio and Suzuki, Kenji and Narihira, Takuya},
  journal={AI for Content Creation workshop at CVPR2023},
  year={2022}
}