This codebase has been used to generate all results found in the ICLR 2023 paper Revisiting Robustness in Graph Machine Learning.
The following instructions refer to the codebase found in the src folder, which was used to generate all results pertaining to random graph models (contextual stochastic block models and contextual Barabási-Albert models with community structure).
The codebase for the experiments on real graphs, together with a separate installation and usage description, can be found in the folder real_graphs.
The code requires the following packages and has been tested with the given versions:
python 3.9.7
pytorch 1.10.2
cudatoolkit 11.3.1
pyg 2.0.3
sacred 0.8.2
tqdm 4.62.3
scipy 1.7.3
torchtyping 0.1.4
seml 0.3.6
jupyterlab 3.2.9
numba 0.54.1
pytest 7.0.0 (optional: only for performing unit tests)
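A possible way to set up such an environment with conda is sketched below; the environment name and channel choices are assumptions, and the cudatoolkit version should be adapted to your local CUDA setup:
conda create -n revisiting_robustness python=3.9.7
conda activate revisiting_robustness
conda install pytorch=1.10.2 cudatoolkit=11.3 -c pytorch
conda install pyg=2.0.3 -c pyg
pip install sacred==0.8.2 tqdm==4.62.3 scipy==1.7.3 torchtyping==0.1.4 seml==0.3.6 jupyterlab==3.2.9 numba==0.54.1 pytest==7.0.0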
All experiments have been conducted using seml. For an introduction to seml, we refer to the official examples in the seml GitHub repository. Individually configured experiments can also be run without the seml command-line interface and without a MongoDB by using the provided exp.ipynb.
The code in exp_eval_robustness.py trains a model with the provided hyperparameters and then analyzes its classic as well as semantic-aware robustness. The corresponding seml experiment configuration files can be found in config/eval_robustness.
For example, the configuration file for the GCN+LP architecture is csbm_gcn_lp.yaml. It can be run by executing:
seml [mongodb-collection-name] add config/eval_robustness/csbm_gcn_lp.yaml start
Optionally, the experiments can be run locally by adding the --local flag:
seml [mongodb-collection-name] add config/eval_robustness/csbm_gcn_lp.yaml start --local
As mentioned above, individually configured experiments can be run without seml and a MongoDB, as shown in exp.ipynb (note: the seml package still needs to be installed).
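If the JupyterLab installation from the environment above is used, the notebook can, for example, be started with:
jupyter lab exp.ipynb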
The experiment code for performing a hyperparameter search for a given model is collected in exp_train.py. The corresponding seml experiment configuration files can be found in config/training.
For example, the configuration file for the GCN architecture is csbm_gcn.yaml. Training the GCNs can then be performed by executing:
seml [mongodb-collection-name] add config/training/csbm_gcn.yaml start
Again, the experiments can optionally be run locally by adding the --local flag:
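seml [mongodb-collection-name] add config/training/csbm_gcn.yaml start --local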
Please cite our paper if you use this code in your own work:
@inproceedings{
gosch2023revisiting,
title={Revisiting Robustness in Graph Machine Learning},
author={Lukas Gosch and Daniel Sturm and Simon Geisler and Stephan G{\"u}nnemann},
booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
year={2023},
}
For questions and feedback, please do not hesitate to contact:
Lukas Gosch, lukas (dot) gosch (at) tum (dot) de, Technical University of Munich
This codebase contains code snippets from the following repositories:
- Adversarial Attacks on Neural Networks for Graph Data
- Robustness of Graph Neural Networks at Scale
- PyTorch Geometric
We thank the authors for making their code public, as well as the development teams of PyTorch Geometric and seml.