Dlut-lab-zmn/Image-Captioning-Attack
We focus on protecting personal information contained in images by generating adversarial examples that fool image captioning systems. Several attacks are evaluated on four models and two standard datasets.
Python
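The core idea described above can be sketched minimally. This is an illustrative FGSM-style untargeted perturbation, not the repository's actual attack: the linear feature map `W`, the `target` vector, and `eps` are hypothetical stand-ins for a real captioning model's differentiable loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a captioning model's loss: any differentiable
# scalar loss over the image works for illustrating the attack.
# Here we use a quadratic loss against a fixed feature target.
W = rng.normal(size=(8, 16))        # hypothetical feature extractor
target = rng.normal(size=8)         # features of the "correct" caption

def loss_and_grad(x):
    """Return the loss and its gradient w.r.t. the image vector x."""
    residual = W @ x - target
    loss = 0.5 * float(residual @ residual)
    grad = W.T @ residual
    return loss, grad

def fgsm(x, eps=0.03):
    """One-step FGSM: nudge each pixel by eps in the gradient's sign
    direction to increase the loss (untargeted attack), then clip
    back to the valid pixel range [0, 1]."""
    _, grad = loss_and_grad(x)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(0.2, 0.8, size=16)  # toy "image" with pixels in [0, 1]
x_adv = fgsm(x)
clean_loss, _ = loss_and_grad(x)
adv_loss, _ = loss_and_grad(x_adv)
```

The perturbation is bounded per pixel by `eps`, so the adversarial image stays visually close to the original while the surrogate loss grows; a real attack would iterate this step against the captioner's caption likelihood.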