Implementation of the Relational Graph Attention operator for heterogeneous graphs in PyTorch
Relational Graph Attention from Scratch
This repository provides a from-scratch implementation of the Relational (heterogeneous) Graph Attention (RGAT) operator. As the name suggests, this implementation is meant only for relational (simple/property/attributed) graphs. Two schemes have been implemented to compute the attention logits $\mathbf{a}^{(r)}_{i,j}$ for each relation type $r \in \mathcal{R}$:
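In the standard RGAT formulation these are additive and multiplicative self-attention over per-relation queries and keys; the equations below are a sketch of that standard form and are assumed to match what rgat_conv.py computes:

$$\mathbf{q}^{(r)}_i = \mathbf{g}^{(r)}_i \cdot \mathbf{Q}^{(r)}, \qquad \mathbf{k}^{(r)}_i = \mathbf{g}^{(r)}_i \cdot \mathbf{K}^{(r)}$$

Additive attention:

$$\mathbf{a}^{(r)}_{i,j} = \mathrm{LeakyReLU}\left(\mathbf{q}^{(r)}_i + \mathbf{k}^{(r)}_j\right)$$

Multiplicative attention:

$$\mathbf{a}^{(r)}_{i,j} = \mathbf{q}^{(r)}_i \cdot \mathbf{k}^{(r)}_j$$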
Here, $\mathbf{Q}^{(r)} \in \mathbb{R}^{F^{\prime} \times D}$ is the query kernel, $\mathbf{K}^{(r)} \in \mathbb{R}^{F^{\prime} \times D}$ is the key kernel, and $\mathbf{g}^{(r)}_i$ is the intermediate relation-type-based representation of node $i$. Moreover, $F^{\prime}$ is the new feature dimensionality and $D$ is the output dimension size.
Two different attention mechanisms have also been provided:
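These correspond to the within-relation and across-relation attention of the RGAT paper; the softmax normalizations below again sketch the standard formulation and are assumed to match the implementation:

$$\text{within-relation:}\qquad \alpha^{(r)}_{i,j} = \frac{\exp\left(\mathbf{a}^{(r)}_{i,j}\right)}{\sum_{k \in \mathcal{N}_r(i)} \exp\left(\mathbf{a}^{(r)}_{i,k}\right)}$$

$$\text{across-relation:}\qquad \alpha^{(r)}_{i,j} = \frac{\exp\left(\mathbf{a}^{(r)}_{i,j}\right)}{\sum_{r^{\prime} \in \mathcal{R}} \sum_{k \in \mathcal{N}_{r^{\prime}}(i)} \exp\left(\mathbf{a}^{(r^{\prime})}_{i,k}\right)}$$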
where $|\mathcal{N}_r(i)|$ denotes the cardinality of the neighborhood of the $i^{th}$ node under relation type $r$, and $\mathcal{W} \in \mathbb{R}^{N \times N}$ is a non-zero matrix, with $N$ being the number of nodes.
More in-depth information about this implementation is available on the official PyTorch Geometric website.
Requirements
PyTorch
PyTorch Geometric
Usage
Data
Though the example.py file contains the path to one of the relational entities graphs (AIFB), this implementation also works for other heterogeneous graph datasets such as MUTAG, BGS, and AM. The AIFB dataset contains 8,285 nodes, 58,086 edges, and 4 classes.
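For illustration, these relational entities graphs can be loaded through PyTorch Geometric's Entities dataset class; the root path below is a placeholder, and the exact loading code in example.py may differ:

```python
from torch_geometric.datasets import Entities

# Load the AIFB relational entities graph; swapping the name for
# 'MUTAG', 'BGS', or 'AM' selects one of the other datasets.
dataset = Entities(root='data/AIFB', name='AIFB')  # root path is a placeholder
data = dataset[0]

print(data.num_nodes)         # number of nodes in the graph
print(data.edge_index.shape)  # [2, num_edges]
print(data.edge_type.shape)   # one relation type per edge
print(dataset.num_classes)    # number of entity classes (4 for AIFB)
```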
Training and Testing
The layer implementation can be found in rgat_conv.py.
To train and test RGATs on heterogeneous graphs, run example.py; after every epoch, it prints both the train and the test accuracy.
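The sketch below shows one minimal way to wire the layer into a node-classification run on AIFB. The class name RGATConv, its constructor arguments (here assumed to mirror torch_geometric.nn.RGATConv), the random node features, the placeholder data path, and the hyperparameters are all assumptions for illustration; example.py may be set up differently.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Entities

# The layer defined in this repository; the class name and constructor
# arguments are assumed to mirror torch_geometric.nn.RGATConv.
from rgat_conv import RGATConv

dataset = Entities(root='data/AIFB', name='AIFB')  # placeholder root path
data = dataset[0]

# AIFB provides no node features, so random features are used here purely
# for illustration; example.py may initialize node representations differently.
x = torch.randn(data.num_nodes, 16)


class RGAT(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels, num_relations):
        super().__init__()
        self.conv1 = RGATConv(in_channels, hidden_channels, num_relations)
        self.conv2 = RGATConv(hidden_channels, out_channels, num_relations)

    def forward(self, x, edge_index, edge_type):
        x = F.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)


model = RGAT(16, 16, dataset.num_classes, dataset.num_relations)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(1, 51):
    # One full-batch training step on the labeled training nodes.
    model.train()
    optimizer.zero_grad()
    out = model(x, data.edge_index, data.edge_type)
    loss = F.cross_entropy(out[data.train_idx], data.train_y)
    loss.backward()
    optimizer.step()

    # Evaluate train and test accuracy after every epoch.
    model.eval()
    with torch.no_grad():
        pred = model(x, data.edge_index, data.edge_type).argmax(dim=-1)
        train_acc = (pred[data.train_idx] == data.train_y).float().mean()
        test_acc = (pred[data.test_idx] == data.test_y).float().mean()
    print(f'Epoch {epoch:03d} | loss {loss:.4f} | '
          f'train acc {train_acc:.4f} | test acc {test_acc:.4f}')
```

Running example.py should produce a similar per-epoch train/test accuracy printout.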