SSRIW

TensorFlow implementation of 'Robust Image Watermarking based on Cross-Attention and Invariant Domain Learning'


Robust Image Watermarking based on Cross-Attention and Invariant Domain Learning


Explanation video

Link to pretrained weights

Cross-attention for image watermarking:

We propose using multi-head attention (MHA) in image watermarking to allocate the watermark across regions according to their relevance. The figure illustrates the decomposition of the cover image and the watermark into patch vectors, which are augmented with positional embeddings. These vectors are then processed by an MHA layer that computes attention scores between patches of one image (as queries) and patches of the other (as keys), allowing the network to relate the cover image and the watermark. The scores are then used to identify suitable watermark embedding locations.

(Figure: cross-attention watermarking)
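The cross-attention step described above can be sketched as follows. This is a minimal single-head NumPy illustration, assuming patch embeddings have already been computed; the random projection matrices stand in for learned weights, and all names and shapes are illustrative rather than the repository's actual API.

```python
import numpy as np

def cross_attention(cover_patches, wm_patches, d_k=64, seed=0):
    """Scaled dot-product cross-attention: cover patches act as queries,
    watermark patches as keys/values (single head, for clarity)."""
    rng = np.random.default_rng(seed)
    d = cover_patches.shape[-1]
    # Random projections stand in for learned weight matrices.
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q = cover_patches @ W_q            # (n_cover, d_k)
    K = wm_patches @ W_k               # (n_wm, d_k)
    V = wm_patches @ W_v               # (n_wm, d_k)
    scores = Q @ K.T / np.sqrt(d_k)    # (n_cover, n_wm)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over watermark patches
    return attn @ V, attn              # attended values, attention map

# 64 cover patches and 64 watermark patches with 48-dim embeddings (illustrative sizes)
cover = np.random.default_rng(1).standard_normal((64, 48))
wm = np.random.default_rng(2).standard_normal((64, 48))
out, attn = cross_attention(cover, wm)
```

Each row of `attn` is a probability distribution over watermark patches, indicating how strongly each cover-image patch attends to each watermark patch.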

Architecture overview:

(Figure: architecture)

Watermark generation example:

(Left) A sample 128x128x3 cover image from our subset of the ImageNet validation set. (Right) The watermark, generated by resizing the image, isolating its first channel, and binarizing the pixels (which range from 0 to 255) to 0 or 1 using the threshold 128 (half of 255).

(Figure: watermark generation)
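A minimal sketch of the watermark-generation procedure above, assuming a nearest-neighbour resize and an output size of 32x32 (the target size is an assumption; the paper's exact resize method may differ):

```python
import numpy as np

def generate_watermark(cover, size=32, threshold=128):
    """Derive a binary watermark from a cover image: resize
    (nearest-neighbour, for simplicity), isolate the first channel,
    and binarize at the given threshold."""
    h, w = cover.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = cover[np.ix_(rows, cols)]       # nearest-neighbour resize
    first_channel = resized[..., 0]           # isolate channel 0
    return (first_channel >= threshold).astype(np.uint8)  # bits in {0, 1}

cover = np.random.default_rng(0).integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
wm = generate_watermark(cover)
```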

Watermark embedding location:

Figure illustrating the pixels affected by the embedding process in each case, obtained as the difference between the cover image and its respective marked image.

(Figure: watermark embedding locations)
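Such a difference map can be computed as a per-pixel absolute difference; the sketch below uses a synthetic "marked" image with a small perturbed region as a stand-in for an actual embedding result:

```python
import numpy as np

def embedding_difference(cover, marked):
    """Per-pixel absolute difference between cover and marked images,
    highlighting where the embedding modified the cover."""
    diff = np.abs(cover.astype(np.int16) - marked.astype(np.int16))
    return diff.astype(np.uint8)

cover = np.random.default_rng(0).integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
# Simulate an embedding that perturbs a 20x20 block (illustrative only).
marked = cover.copy()
block = marked[40:60, 40:60].astype(np.int16) + 5
marked[40:60, 40:60] = np.clip(block, 0, 255).astype(np.uint8)
diff = embedding_difference(cover, marked)
```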

Noise tolerance study:

We conduct an experiment to test the tolerance of the proposed scheme against increasing levels of noise. As expected, performance degrades steadily as the degree of noise increases.

(Figure: noise tolerance)
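The shape of such an experiment can be sketched as below. This is only an illustrative proxy: simple thresholding stands in for the actual watermark decoder, and bit error rate (BER) is measured against additive Gaussian noise of increasing strength.

```python
import numpy as np

def bit_error_rate(bits_a, bits_b):
    """Fraction of bits that differ between two binary arrays."""
    return float(np.mean(bits_a != bits_b))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8).astype(np.float64)
clean_bits = image >= 128  # thresholding as a stand-in for the decoder

bers = []
for sigma in (0, 10, 30, 60):  # increasing Gaussian noise levels
    noisy = image + rng.normal(0, sigma, image.shape)
    noisy_bits = noisy >= 128
    bers.append(bit_error_rate(clean_bits, noisy_bits))
```

As in the study, the recovered-bit error rate rises with the noise level: zero at sigma = 0 and progressively larger as sigma grows.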

Paper Citation:

Dasgupta, A. and Zhong, X., 2023. Robust Image Watermarking based on Cross-Attention and Invariant Domain Learning. arXiv preprint arXiv:2310.05395.

https://arxiv.org/abs/2310.05395