AIJack: Security and Privacy Risk Simulator for Machine Learning

❤️ If you like AIJack, please consider becoming a GitHub Sponsor ❤️

What is AIJack?

AIJack is an easy-to-use, open-source simulation tool for testing the security of your AI system against hijackers. It provides advanced defense techniques such as Differential Privacy, Homomorphic Encryption, K-anonymity, and Federated Learning to help protect your AI, and lets you test and simulate defenses against various attacks such as Poisoning, Model Inversion, Backdoor, and Free-Rider. More than 30 state-of-the-art methods are supported. For more information, check our documentation and start securing your AI today with AIJack.

Installation

You can install AIJack with pip. AIJack requires Boost and pybind11.

apt install -y libboost-all-dev
pip install -U pip
pip install "pybind11[global]"

pip install aijack

If you want to use the latest version, you can install it directly from GitHub.

pip install git+https://github.com/Koukyosyumei/AIJack

We also provide a Dockerfile.
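
For example, assuming the Dockerfile sits at the repository root (the image name below is our own choice), you can build and enter a container as follows:

git clone https://github.com/Koukyosyumei/AIJack
cd AIJack
docker build -t aijack .
docker run -it aijack /bin/bash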

Quick Start

Here is a brief overview of AIJack.

Features

  • All-around abilities for both attack & defense
  • PyTorch-friendly design
  • Compatible with scikit-learn
  • Fast Implementation with C++ backend
  • MPI-Backend for Federated Learning
  • Extensible modular APIs

Basic Interface

For standard machine learning algorithms, AIJack allows you to simulate attacks against machine learning models with the Attacker APIs. AIJack mainly supports PyTorch and scikit-learn models.

# abstract code

attacker = Attacker(target_model)
result = attacker.attack()
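
As a concrete sketch, here is how this pattern might look with MI-FACE, a model inversion attack from the supported-algorithms list, against a small PyTorch classifier. The import path and signatures below are assumptions based on the tutorials and may differ between versions, so verify them against the documentation.

# a minimal sketch, assuming MI_FACE accepts the target model and input shape
# and that attack() returns the reconstructed data plus a log;
# check the AIJack documentation for the exact signature
import torch.nn as nn

from aijack.attack import MI_FACE

target_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
attacker = MI_FACE(target_model, input_shape=(1, 1, 28, 28))
reconstructed_data, log = attacker.attack()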

For distributed learning such as Federated Learning and Split Learning, AIJack offers four basic APIs: Client, Server, API, and Manager. Client and Server represent the clients and servers of each distributed learning scheme. You can execute training by registering the clients and server to API and running it. Manager adds abilities such as attack, defense, or parallel computing to Client, Server, or API via its attach method.

# abstract code

clients = [Client(), Client()]
server = Server()
api = API(clients, server)
api.run() # execute training

c_manager = ClientManagerForAdditionalAbility(...)
s_manager = ServerManagerForAdditionalAbility(...)
ExtendedClient = c_manager.attach(Client)
ExtendedServer = s_manager.attach(Server)

extended_clients = [ExtendedClient(...), ExtendedClient(...)]
extended_server = ExtendedServer(...)
api = API(extended_clients, extended_server)
api.run() # execute training

For example, the code below implements a scenario where the server in Federated Learning tries to steal the training data with a gradient-based model inversion attack. Here input_shape, model_1, model_2, model_3, criterion, optimizers, and dataloaders are placeholders that you supply.

from aijack.collaborative.fedavg import FedAVGAPI, FedAVGClient, FedAVGServer
from aijack.attack.inversion import GradientInversionAttackServerManager

# attach the gradient inversion attack to the FedAVG server class
manager = GradientInversionAttackServerManager(input_shape)
FedAVGServerAttacker = manager.attach(FedAVGServer)

# the malicious server aggregates as usual while reconstructing the clients' data
clients = [FedAVGClient(model_1), FedAVGClient(model_2)]
server = FedAVGServerAttacker(clients, model_3)

api = FedAVGAPI(server, clients, criterion, optimizers, dataloaders)
api.run()
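
A minimal sketch of how those placeholders could be filled with standard PyTorch objects (the shapes and hyperparameters below are our own illustrative choices, not part of AIJack's API):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

input_shape = (1, 1, 28, 28)  # a batch of single-channel 28x28 images

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

model_1, model_2, model_3 = make_model(), make_model(), make_model()

criterion = nn.CrossEntropyLoss()
# assuming one optimizer and one dataloader per client
optimizers = [torch.optim.SGD(m.parameters(), lr=0.01) for m in (model_1, model_2)]
dataloaders = [
    DataLoader(
        TensorDataset(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))),
        batch_size=8,
    )
    for _ in range(2)
]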

Resources

You can also find more examples in our tutorials and documentation.

Supported Algorithms

Collaborative
  • Horizontal FL: FedAVG, FedProx, FedKD, FedGEMS, FedMD, DSFL
  • Vertical FL: SplitNN, SecureBoost

Attack
  • Model Inversion: MI-FACE, DLG, iDLG, GS, CPL, GradInversion, GAN Attack
  • Label Leakage: Norm Attack
  • Poisoning: History Attack, Label Flip, MAPF, SVM Poisoning
  • Backdoor: DBA
  • Free-Rider: Delta-Weight
  • Evasion: Gradient-Descent Attack
  • Membership Inference: Shadow Attack

Defense
  • Homomorphic Encryption: Paillier, CKKS
  • Differential Privacy: DPSGD, AdaDPS
  • Anonymization: Mondrian
  • Others: Soteria, FoolsGold, MID, Sparse Gradient
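
As one hedged example from the defense side, the sketch below uses the Paillier implementation backed by the C++ core. It assumes a PaillierKeyGenerator with a generate_keypair method and ciphertexts that support homomorphic addition, as in the tutorials at the time of writing; verify against the current documentation.

# a minimal sketch, assuming this Paillier API; see the docs for the
# authoritative interface
from aijack.defense.paillier import PaillierKeyGenerator

keygenerator = PaillierKeyGenerator(512)  # key length in bits
pk, sk = keygenerator.generate_keypair()

ct_1 = pk.encrypt(13)
ct_2 = pk.encrypt(29)
ct_sum = ct_1 + ct_2  # addition on ciphertexts
print(sk.decrypt(ct_sum))  # 42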

Contact

welcome2aijack[@]gmail.com