This repository contains the code for EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs, published in AAAI 2020.
Seven datasets were used in the paper:
- stochastic block model: See the 'data' folder. Untar the file for use.
- bitcoin OTC: Downloadable from http://snap.stanford.edu/data/soc-sign-bitcoin-otc.html
- bitcoin Alpha: Downloadable from http://snap.stanford.edu/data/soc-sign-bitcoin-alpha.html
- uc_irvine: Downloadable from http://konect.uni-koblenz.de/networks/opsahl-ucsocial
- autonomous systems: Downloadable from http://snap.stanford.edu/data/as-733.html
- reddit hyperlink network: Downloadable from http://snap.stanford.edu/data/soc-RedditHyperlinks.html
- elliptic: A preprocessed version of https://www.kaggle.com/ellipticco/elliptic-data-set is provided at the following link:
https://ibm.box.com/s/j04m8lwoqktjixke2gj7lgllrvvdidme. Untar the file in the 'data' folder for use.
Update on elliptic: the Box link is no longer valid. Please see the instructions for manually preparing the preprocessed version.
Place all downloaded datasets in the 'data' folder.
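As a minimal sketch of how one of the SNAP datasets could be fetched into the 'data' folder, the snippet below downloads the Bitcoin OTC archive. The exact archive filename in the URL is an assumption; verify it on the dataset page linked above.

```python
# Hypothetical download helper (not part of the repository): fetch one of the
# SNAP archives into the 'data' folder. The archive filename is an assumption;
# verify it on the dataset page linked above.
import os
import urllib.request

DATA_DIR = "data"
URL = "http://snap.stanford.edu/data/soc-sign-bitcoinotc.csv.gz"  # assumed filename

os.makedirs(DATA_DIR, exist_ok=True)
target = os.path.join(DATA_DIR, os.path.basename(URL))
if not os.path.exists(target):
    urllib.request.urlretrieve(URL, target)
print("Dataset available at", target)
```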
- PyTorch 1.0 or higher
- Python 3.6
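A quick way to confirm the environment matches these requirements is a small check like the one below (a sketch, not part of the repository):

```python
# Minimal environment check: print Python and PyTorch versions and whether
# CUDA is available (relevant for the use_cuda flag mentioned below).
import sys
import torch

print("Python:", sys.version.split()[0])              # expect 3.6+
print("PyTorch:", torch.__version__)                  # expect 1.0 or higher
print("CUDA available:", torch.cuda.is_available())
```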
The Docker file describes a container that lets you run the experiments on any Unix-based machine. A GPU is recommended for training the models; otherwise, set the use_cuda flag in parameters.yaml to false.
From this folder, build the image:
sudo docker build -t gcn_env:latest docker-set-up/
Start the container:
sudo docker run -ti --gpus all -v $(pwd):/evolveGCN gcn_env:latest
This will start a bash session in the container.
Run the experiments by passing a YAML configuration file to the --config_file argument. For example:
python run_exp.py --config_file ./experiments/parameters_example.yaml
Most of the parameters in the YAML configuration file are self-explanatory. For hyperparameter tuning, a parameter can be set to 'None' together with a min and max value (for example, 'learning_rate', 'learning_rate_min', and 'learning_rate_max'); each run then picks a random value within those boundaries. The 'experiments' folder contains one file for each result reported in the EvolveGCN paper.
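The snippet below illustrates the random-search behaviour described above; it is a sketch of the idea under the stated assumptions, not the repository's actual configuration code, and the helper name resolve_param is hypothetical.

```python
# Sketch of the random-search behaviour described above: when a parameter is
# set to 'None' and *_min / *_max bounds are given, each run samples a value
# uniformly within the boundaries. resolve_param is a hypothetical helper,
# not the repository's actual configuration code.
import random

def resolve_param(config, name):
    value = config.get(name)
    if value is None or value == 'None':
        value = random.uniform(config[name + '_min'], config[name + '_max'])
    return value

config = {'learning_rate': 'None', 'learning_rate_min': 1e-4, 'learning_rate_max': 1e-2}
print(resolve_param(config, 'learning_rate'))
```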
Setting 'use_logfile' to True in the configuration YAML writes a file to the 'log' directory containing information about the experiment and the validation metrics for each epoch. The file can be analyzed manually; alternatively, 'log_analyzer.py' can parse a log file automatically and report the evaluation metrics at the best validation epoch. For example:
python log_analyzer.py log/filename.log
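The idea behind this analysis is sketched below: pick the epoch with the best validation metric and report the metrics recorded for it. The parsed 'epochs' structure, the metric name, and the values are illustrative assumptions, not the actual log format produced by the code.

```python
# Illustration of the idea behind the log analysis: select the epoch with the
# best validation metric and report the metrics recorded for it. The parsed
# 'epochs' structure and the metric values are assumptions for illustration.
epochs = [
    {"epoch": 0, "valid_f1": 0.61, "test_f1": 0.58},
    {"epoch": 1, "valid_f1": 0.67, "test_f1": 0.62},
    {"epoch": 2, "valid_f1": 0.65, "test_f1": 0.64},
]

best = max(epochs, key=lambda e: e["valid_f1"])
print("Best validation epoch:", best["epoch"], "test F1:", best["test_f1"])
```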
[1] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, and Charles E. Leiserson. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs. AAAI 2020.
Please cite the paper if you use this code in your work:
@INPROCEEDINGS{egcn,
  AUTHOR = {Aldo Pareja and Giacomo Domeniconi and Jie Chen and Tengfei Ma and Toyotaro Suzumura and Hiroki Kanezashi and Tim Kaler and Tao B. Schardl and Charles E. Leiserson},
  TITLE = {{EvolveGCN}: Evolving Graph Convolutional Networks for Dynamic Graphs},
  BOOKTITLE = {Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence},
  YEAR = {2020},
}