cross-image-attention

Official implementation of "Cross-Image Attention for Zero-Shot Appearance Transfer"


Cross-Image Attention for Zero-Shot Appearance Transfer (SIGGRAPH 2024)

Yuval Alaluf*, Daniel Garibi*, Or Patashnik, Hadar Averbuch-Elor, Daniel Cohen-Or
Tel Aviv University
* Denotes equal contribution

Recent advancements in text-to-image generative models have demonstrated a remarkable ability to capture a deep semantic understanding of images. In this work, we leverage this semantic knowledge to transfer the visual appearance between objects that share similar semantics but may differ significantly in shape. To achieve this, we build upon the self-attention layers of these generative models and introduce a cross-image attention mechanism that implicitly establishes semantic correspondences across images. Specifically, given a pair of images, one depicting the target structure and the other specifying the desired appearance, our cross-image attention combines the queries corresponding to the structure image with the keys and values of the appearance image. This operation, when applied during the denoising process, leverages the established semantic correspondences to generate an image combining the desired structure and appearance. In addition, to improve the output image quality, we harness three mechanisms that either manipulate the noisy latent codes or the model's internal representations throughout the denoising process. Importantly, our approach is zero-shot, requiring no optimization or training. Experiments show that our method is effective across a wide range of object categories and is robust to variations in shape, size, and viewpoint between the two input images.
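To make the core operation concrete, below is a minimal, self-contained sketch of the idea. It is a simplified illustration, not the repository's actual implementation (which hooks the self-attention layers of the diffusion UNet during denoising): the queries computed for the structure image attend over the keys of the appearance image, and the resulting attention map aggregates the appearance image's values.

import torch

def cross_image_attention(q_struct, k_app, v_app):
    # q_struct: queries from the structure image's self-attention layer.
    # k_app, v_app: keys and values from the same layer of the appearance image.
    # All tensors have shape (batch, tokens, dim).
    dim = q_struct.shape[-1]
    # The attention map implicitly matches each structure token to
    # semantically similar tokens in the appearance image.
    attn = torch.softmax(q_struct @ k_app.transpose(-1, -2) * dim ** -0.5, dim=-1)
    # Aggregating the appearance values transfers the appearance onto the
    # structure image's layout.
    return attn @ v_app

# Toy example: 4096 tokens (a 64x64 latent) with 320 channels.
q = torch.randn(1, 4096, 320)
k = torch.randn(1, 4096, 320)
v = torch.randn(1, 4096, 320)
out = cross_image_attention(q, k, v)  # shape: (1, 4096, 320)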



Given two images depicting a source structure and a target appearance, our method generates an image merging the structure of one image with the appearance of the other in a zero-shot manner.

Description

Official implementation of our Cross-Image Attention and Appearance Transfer paper.

Environment

Our code builds on the requirements of the diffusers library. To set up the environment, please run:

conda env create -f environment/environment.yaml
conda activate cross_image

Usage


Sample appearance transfer results obtained by our cross-image attention technique.

To generate an image, simply run the run.py script. For example:

python run.py \
--app_image_path /path/to/appearance/image.png \
--struct_image_path /path/to/structure/image.png \
--output_path /path/to/output/images.png \
--domain_name [domain the objects are taken from (e.g., animal, building)] \
--use_masked_adain True \
--contrast_strength 1.67 \
--swap_guidance_scale 3.5

Notes:

  • To perform the inversion, if no prompt is explicitly specified, we will use the prompt "A photo of a [domain_name]".
  • If --use_masked_adain is set to True (its default value), then --domain_name must be given in order to compute the masks using the self-segmentation technique.
    • In cases where the domains are not well-defined, you can also set --use_masked_adain to False and no domain_name is required.
  • You can set --load_latents to True to load the latents from a file instead of inverting the input images every time.
    • This is useful if you want to generate multiple images with the same structure but different appearances; see the example after this list.
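For example, a hypothetical follow-up run that reuses the saved latents of the same structure image with a different appearance image (all paths are placeholders):

python run.py \
--app_image_path /path/to/another/appearance/image.png \
--struct_image_path /path/to/structure/image.png \
--output_path /path/to/output/images.png \
--domain_name animal \
--load_latents True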

Demo Notebook


Additional appearance transfer results obtained by our cross-image attention technique.

We also provide a notebook that can be run in Google Colab; please see notebooks/demo.ipynb.

HuggingFace Demo 🤗

We also provide a simple HuggingFace demo to run our method on your own images.
Check it out here!

Acknowledgements

This code builds on the code from the diffusers library. In addition, we borrow code from the following repositories:

Citation

If you use this code for your research, please cite the following work:

@misc{alaluf2023crossimage,
      title={Cross-Image Attention for Zero-Shot Appearance Transfer}, 
      author={Yuval Alaluf and Daniel Garibi and Or Patashnik and Hadar Averbuch-Elor and Daniel Cohen-Or},
      year={2023},
      eprint={2311.03335},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}