Security and Privacy in Machine Learning

Implementations of security and privacy attacks on machine learning: evasion attacks, model stealing, model poisoning, membership inference attacks, ...

Adversarial Malware Generator

MalwareDetector.py detects malware with a CNN over raw bytes. Adversarial_Malware_Generator.py generates adversarial malware by appending bytes to the end of the file and perturbing them, so the original payload is untouched and stays functional. Malware_DoNotExecute.exe is the malware sample the adversarial example is created from.
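A minimal sketch of the append-and-perturb idea, assuming a PyTorch detector that maps a normalized byte tensor of shape [1, length] to a malware probability; the actual interface of MalwareDetector.py may differ, and the function name and hyperparameters here are illustrative.

```python
import torch

def make_adversarial(detector, malware_bytes: bytes,
                     n_append: int = 1024, steps: int = 200, lr: float = 0.05) -> bytes:
    body = torch.tensor(list(malware_bytes), dtype=torch.float32) / 255.0
    # Only the appended tail is optimized; the original bytes are untouched,
    # so the sample keeps its malicious functionality.
    tail = torch.rand(n_append, requires_grad=True)
    opt = torch.optim.Adam([tail], lr=lr)
    for _ in range(steps):
        x = torch.cat([body, tail]).unsqueeze(0)  # shape [1, length + n_append]
        loss = detector(x).mean()   # assumed to return the malware probability
        opt.zero_grad()
        loss.backward()             # gradients flow only into the tail
        opt.step()
        with torch.no_grad():
            tail.clamp_(0, 1)       # keep the tail inside the valid byte range
    # Quantize the optimized tail back to real bytes and append it to the file.
    adv_tail = bytes((tail.detach() * 255).round().to(torch.uint8).tolist())
    return malware_bytes + adv_tail
```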

Evasion Attack and Defense

Targeted and non-targeted attacks using random noise, FGSM (Explaining and Harnessing Adversarial Examples), and PGD (Towards Deep Learning Models Resistant to Adversarial Attacks), measuring their success rates against models hardened with FGSM/PGD adversarial training. A minimal sketch of the two gradient-based attacks is below.
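The sketch assumes a PyTorch classifier `model` and inputs in [0, 1]; the epsilon, step size, and step count are illustrative defaults, not the notebook's settings.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single step in the direction that increases the loss.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_orig = x.clone().detach()
    # Random start inside the epsilon ball, as in Madry et al.
    x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        step = x_adv + alpha * x_adv.grad.sign()
        # Project back into the epsilon ball around the original input.
        x_adv = (x_orig + (step - x_orig).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```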

JBDA Model Stealing and Obfuscated Gradients

Jacobian-based Dataset Augmentation (JBDA) from Practical Black-Box Attacks against Machine Learning trains a surrogate model whose gradients are then used to attack a target model defended by gradient obfuscation. The black-box attack performs better than the white-box one, as argued in Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. A sketch of one augmentation round follows.
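A hedged sketch of a single JBDA round following Papernot et al.: each point in the attacker's substitute set is pushed along the sign of the surrogate's gradient for the class the black-box target assigned to it. `surrogate`, `query_target`, and `lam` are placeholders, not names from the notebook.

```python
import torch

def jbda_round(surrogate, query_target, x_sub, lam=0.1):
    x = x_sub.clone().detach().requires_grad_(True)
    logits = surrogate(x)
    labels = query_target(x_sub)  # black-box oracle, assumed to return class indices
    # Gradient of the surrogate's logit for the oracle-assigned class.
    selected = logits.gather(1, labels.unsqueeze(1)).sum()
    selected.backward()
    x_new = (x_sub + lam * x.grad.sign()).clamp(0, 1)
    # The substitute set doubles each round: old points plus perturbed copies.
    return torch.cat([x_sub, x_new.detach()])
```

The surrogate is retrained on the enlarged, oracle-labeled set after each round, so its decision boundary gradually approximates the target's.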

Membership Inference Attack

A brief look at Membership Inference Attacks Against Machine Learning Models; a good inference rate was achievable with only two shadow models on CIFAR-10.
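A hedged sketch of the shadow-model pipeline from Shokri et al.: each shadow model is trained on a split whose membership is known, and its confidence vectors become training data for a binary attack classifier. The function and split names are placeholders for the notebook's CIFAR-10 setup.

```python
import torch

def build_attack_dataset(shadow_models, in_sets, out_sets):
    """Label each shadow model's softmax outputs as member (1) or non-member (0)."""
    feats, labels = [], []
    for model, d_in, d_out in zip(shadow_models, in_sets, out_sets):
        model.eval()
        with torch.no_grad():
            for x, membership in ((d_in, 1), (d_out, 0)):
                conf = torch.softmax(model(x), dim=1)
                # Sort confidences so the attack model sees a canonical ordering.
                feats.append(conf.sort(dim=1, descending=True).values)
                labels.append(torch.full((x.shape[0],), membership))
    return torch.cat(feats), torch.cat(labels)
```

A logistic regression or small MLP trained on these feature/label pairs then predicts membership from the target model's confidence vector on a query point.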

Poisoning Attack

Poisoning attacks based on Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks; the core crafting step is sketched below.
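A hedged sketch of the feature-collision objective from the Poison Frogs paper: the poison is optimized to sit near the target in feature space while staying close to a correctly labeled base image in input space, so it keeps its clean label. `feature_net` stands in for the victim network's penultimate-layer features; names and hyperparameters are illustrative.

```python
import torch

def craft_poison(feature_net, base, target, beta=0.1, steps=500, lr=0.01):
    # Start from a base image of the attacker's desired (clean) label.
    poison = base.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    with torch.no_grad():
        target_feats = feature_net(target)
    for _ in range(steps):
        # Collide with the target in feature space while staying close to the
        # base in input space, so the poison still looks correctly labeled.
        loss = (torch.norm(feature_net(poison) - target_feats) ** 2
                + beta * torch.norm(poison - base) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0, 1)
    return poison.detach()
```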