Pinned Repositories
SRS-ME
Separable Diffusion Model Unlearning
MTDM
Temporal Knowledge Graph Reasoning Triggered by Memories
Image-Captioning-Attack
We focus on protecting personal information contained in images by generating adversarial examples that fool image captioning systems. Several attacks are evaluated on four models and two standard datasets.
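The description above mentions crafting adversarial examples against a captioning model. A minimal sketch of one common approach, the Fast Gradient Sign Method (FGSM), is shown below; the toy model, function names, and epsilon value are illustrative assumptions, not code from this repository, which may use different attacks entirely.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    """One FGSM step: x_adv = x + eps * sign(dL/dx), clipped to valid pixels.

    `grad` is the gradient of the attacker's loss w.r.t. the input; in a real
    attack it would come from backpropagation through the captioning model.
    """
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep the "image" in [0, 1] pixel range

# Hypothetical stand-in: a linear score w.x whose loss is -w.x, so the
# analytic gradient of the loss w.r.t. x is simply -w.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=8)  # stand-in for image pixels
w = rng.normal(size=8)             # stand-in for model weights
grad = -w                          # gradient of the toy loss

x_adv = fgsm_perturb(x, grad)
```

The sign of the gradient (rather than the gradient itself) bounds the per-pixel change by epsilon, which is why FGSM perturbations are typically imperceptible.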
Source-attack
Combines source forensics with adversarial attacks, and proposes a reasonable attack and defense method for this setting.
GEAA-for-data-protection
class-correlation-correction
Dlut-lab-zmn.github.io
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
Mengnan-Zhao
Config files for my GitHub profile.
Project-Traceability
SGG_Attack
Adversarial Attacks on Scene Graph Generation