ARWGAN

This repository is the official PyTorch implementation of ARWGAN: attention-guided robust image watermarking model based on GAN.

Pretrain

The pre-trained model of ARWGAN is available.

Train

If you need to train ARWGAN from scratch, use the following command line.

  python main.py new -n name -d data-dir -b batch-size -e epochs -n noise
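
As a concrete illustration, a run might look like the line below. The run name, dataset path, batch size, and epoch count are placeholder values; the long-form --name and --noise spellings are assumptions made here only to disambiguate the two -n flags shown above, and the noise specification is copied from the Test command further down, so check python main.py new --help for the exact argument names and the expected noise format.

  python main.py new --name arwgan-demo -d ./data/coco -b 32 -e 300 --noise 'Jpeg(10.0)'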

Environment requirements:

  • Python == 3.7.4; Torch == 1.12.1+cu102; Torchvision == 0.13.1; Pillow (PIL) == 7.2.0
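
One possible way to reproduce this environment, assuming the CUDA 10.2 builds implied by the version string above (adjust the index URL for a different CUDA version):

  pip install torch==1.12.1+cu102 torchvision==0.13.1+cu102 --extra-index-url https://download.pytorch.org/whl/cu102
  pip install Pillow==7.2.0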

Test

Put the pre-trained model into the pretrain folder; you can then test ARWGAN with the following command line.

  python test.py -o ./pretrain/options-and-config.pickle -c ./pretrain/checkpoints/ARWGAN.pyt -s /mnt/chengxin/Datasets/DUTS/DUTS-TE/Std-Image-30/ -n 'Jpeg(10.0)'
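For programmatic use outside test.py, the saved options file and checkpoint can also be inspected directly. The sketch below is a minimal example: the pickle layout and checkpoint keys are assumptions carried over from the HiDDeN codebase this repository builds on (see Acknowledgement), so adjust them to whatever the project's utils actually save. Run it from the repository root so the pickled option classes can be imported.

  # Minimal sketch; the pickle layout and checkpoint contents are assumptions.
  import pickle
  import torch

  with open('./pretrain/options-and-config.pickle', 'rb') as f:
      train_options = pickle.load(f)   # assumed: training options
      noise_config = pickle.load(f)    # assumed: noise-layer configuration
      net_config = pickle.load(f)      # assumed: encoder/decoder configuration

  checkpoint = torch.load('./pretrain/checkpoints/ARWGAN.pyt', map_location='cpu')
  print(type(train_options), type(noise_config), type(net_config))
  print(list(checkpoint.keys()))       # assumed: state dicts for the encoder-decoder and discriminator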

Citation

@ARTICLE{10155247,
  author={Huang, Jiangtao and Luo, Ting and Li, Li and Yang, Gaobo and Xu, Haiyong and Chang, Chin-Chen},
  journal={IEEE Transactions on Instrumentation and Measurement}, 
  title={ARWGAN: Attention-Guided Robust Image Watermarking Model Based on GAN}, 
  year={2023},
  volume={72},
  number={},
  pages={1-17},
  doi={10.1109/TIM.2023.3285981}}

Acknowledgement

The code is based on HiDDeN.