GraphWar

A graph adversarial learning toolbox based on PyTorch and DGL.


⚔🛡 GraphWar: Arms Race in Graph Adversarial Learning

The robustness of graph neural networks (GNNs) against adversarial attacks has gained increasing attention in the last few years. While there are numerous (heuristic) approaches aimed at robustifying GNNs, a newly devised, stronger attack always comes along to break them, leading to an arms race between attackers and defenders. To this end, GraphWar aims to provide easy implementations with unified interfaces to facilitate research in graph adversarial learning.


NOTE: GraphWar is still in the early stages and the API will likely continue to change.

If you are interested in this project, don't hesitate to contact me or make a PR directly.

🚀 Installation

Please make sure you have installed PyTorch and Deep Graph Library (DGL).
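For example, a CPU-only environment can be set up roughly as follows (the exact commands depend on your OS, Python, and CUDA versions; see the official PyTorch and DGL installation guides):

pip install torch
pip install dgl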

# Coming soon
pip install -U graphwar

or

# Recommended now
git clone https://github.com/EdisonLeeeee/GraphWar.git && cd GraphWar
pip install -e . --verbose

where -e installs the package in "editable" mode, so you don't have to reinstall it every time you make changes.
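A quick import check confirms that the package is available (this only assumes graphwar is importable under the same name used in the examples below):

python -c "import graphwar"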

⚡ Get Started

Assume that you have a dgl.DGLGraph instance g that describes your dataset. NOTE: Please make sure that g does NOT contain self-loops, i.e., run g = g.remove_self_loop() first.
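For example, a minimal setup using DGL's built-in Cora dataset (the dataset is only for illustration; any DGLGraph works the same way):

from dgl.data import CoraGraphDataset

g = CoraGraphDataset()[0]   # a standard citation graph shipped with DGL
g = g.remove_self_loop()    # GraphWar expects a graph without self-loops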

A simple targeted manipulation attack

from graphwar.attack.targeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(1, num_budgets=3) # attacking target node `1` with `3` edges 
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
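A quick sanity check on the result (num_edges() is a standard DGLGraph method; the exact structure of edge_flips is an assumption based on its name):

print(g.num_edges(), attacked_g.num_edges())  # edge counts before and after the attack
print(edge_flips)                             # the edges selected to be flipped (added or removed)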

A simple untargeted (non-targeted) manipulation attack

from graphwar.attack.untargeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(num_budgets=0.05) # perturbing 5% of the edges
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
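To see what the perturbation actually does, a common check is to train the same model on the clean graph and on the attacked graph and compare test accuracy. The sketch below deliberately uses only standard DGL/PyTorch building blocks (dgl.nn.GraphConv) rather than GraphWar's own model classes; it assumes g carries the usual Cora-style node fields feat, label, train_mask and test_mask, and that the attacked graph keeps the original node data.

import torch
import torch.nn.functional as F
from dgl.nn import GraphConv

class TinyGCN(torch.nn.Module):
    def __init__(self, in_feats, num_classes, hidden=16):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden)
        self.conv2 = GraphConv(hidden, num_classes)

    def forward(self, graph, x):
        x = F.relu(self.conv1(graph, x))
        return self.conv2(graph, x)

def test_accuracy(graph, epochs=200):
    graph = graph.add_self_loop()  # self-loops are added back here only for GCN message passing
    feat, label = graph.ndata["feat"], graph.ndata["label"]
    train_mask, test_mask = graph.ndata["train_mask"], graph.ndata["test_mask"]

    model = TinyGCN(feat.shape[1], int(label.max()) + 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

    model.train()
    for _ in range(epochs):
        loss = F.cross_entropy(model(graph, feat)[train_mask], label[train_mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        pred = model(graph, feat).argmax(dim=1)
    return (pred[test_mask] == label[test_mask]).float().mean().item()

print("clean    accuracy:", test_accuracy(g))
print("attacked accuracy:", test_accuracy(attacked_g))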

👀 Implementations

In detail, the following methods are currently implemented:

Attack

Manipulation Attack

Targeted Attack

| Methods | Venue |
| ------- | ----- |
| RandomAttack | A simple random method that chooses edges to flip randomly. |
| DICEAttack | Waniek et al. 📝Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 |
| Nettack | Zügner et al. 📝Adversarial Attacks on Neural Networks for Graph Data, KDD'18 |
| FGAttack (FGSM) | Goodfellow et al. 📝Explaining and Harnessing Adversarial Examples, ICLR'15<br>Chen et al. 📝Fast Gradient Attack on Network Embedding, arXiv'18<br>Chen et al. 📝Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20<br>Dai et al. 📝Adversarial Attack on Graph Structured Data, ICML'18 |
| GFAttack | Chang et al. 📝A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 |
| IGAttack | Wu et al. 📝Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 |
| SGAttack | Li et al. 📝Adversarial Attack on Large Scale Graph, TKDE'21 |

Untargeted Attack

| Methods | Venue |
| ------- | ----- |
| RandomAttack | A simple random method that chooses edges to flip randomly. |
| DICEAttack | Waniek et al. 📝Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 |
| FGAttack (FGSM) | Goodfellow et al. 📝Explaining and Harnessing Adversarial Examples, ICLR'15<br>Chen et al. 📝Fast Gradient Attack on Network Embedding, arXiv'18<br>Chen et al. 📝Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20<br>Dai et al. 📝Adversarial Attack on Graph Structured Data, ICML'18 |
| Metattack | Zügner et al. 📝Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 |
| PGD, MinmaxAttack | Xu et al. 📝Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 |

Injection Attack

Universal Attack

Backdoor Attack

| Methods | Venue |
| ------- | ----- |
| LGCBackdoor, FGBackdoor | Chen et al. 📝Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 |

Defense

Standard GNNs (without defense)

| Methods | Venue |
| ------- | ----- |
| GCN | Kipf et al. 📝Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 |
| SGC | Wu et al. 📝Simplifying Graph Convolutional Networks, ICLR'19 |
| GAT | Veličković et al. 📝Graph Attention Networks, ICLR'18 |
| DAGNN | Liu et al. 📝Towards Deeper Graph Neural Networks, KDD'20 |
| APPNP | Klicpera et al. 📝Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 |
| JKNet | Xu et al. 📝Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 |

Model-Level

| Methods | Venue |
| ------- | ----- |
| MedianGCN | Chen et al. 📝Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21 |
| RobustGCN | Zhu et al. 📝Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19 |
| ReliableGNN | Geisler et al. 📝Reliable Graph Neural Networks via Robust Aggregation, NeurIPS'20<br>Geisler et al. 📝Robustness of Graph Neural Networks at Scale, NeurIPS'21 |
| ElasticGNN | Liu et al. 📝Elastic Graph Neural Networks, ICML'21 |
| AirGNN | Liu et al. 📝Graph Neural Networks with Adaptive Residual, NeurIPS'21 |
| SimPGCN | Jin et al. 📝Node Similarity Preserving Graph Convolutional Networks, WSDM'21 |

Data-Level

| Methods | Venue |
| ------- | ----- |
| JaccardPurification | Wu et al. 📝Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 |
| SVDPurification | Entezari et al. 📝All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, WSDM'20 |

More details on the literature and the official code can be found at Awesome Graph Adversarial Learning.