A set of layers for graph convolutions in TensorFlow Keras that use RaggedTensors.
General | Requirements | Installation | Documentation | Implementation details | Literature | Data | Datasets | Training | Issues | Citing | References
The kgcnn package contains several layer classes to build up graph convolution models; some models are given as examples. Documentation is generated in docs. The focus of kgcnn is (batched) graph learning for molecules (kgcnn.molecule) and materials (kgcnn.crystal). If you want to get in contact, feel free to open a discussion.
Standard python package requirements are listed in setup.py and are installed automatically (kgcnn >=2.2).
Packages which must be installed manually for full functionality:
- openbabel >=3.0.1
Clone the repository or the latest release and install in editable mode:
pip install -e ./gcnn_keras
or install the latest release via the Python Package Index:
pip install kgcnn
Auto-documentation is generated at https://kgcnn.readthedocs.io/en/latest/index.html.
The most frequent use cases for graph convolutions are node or graph classification. Regarding size, either a single large graph, e.g. a citation network, or many small (batched) graphs like molecules have to be considered. Graphs can be represented by an index list of connections plus feature information. Typical quantities in tensor format used to describe a graph are listed below, followed by a toy example.
- `nodes`: Node-list of shape `(batch, [N], F)`, where `N` is the number of nodes and `F` is the node feature dimension.
- `edges`: Edge-list of shape `(batch, [M], F)`, where `M` is the number of edges and `F` is the edge feature dimension.
- `indices`: Connection-list of shape `(batch, [M], 2)`, where `M` is the number of edges. The indices denote a connection of incoming or receiving node `i` and outgoing or sending node `j` as `(i, j)`.
- `state`: Graph state information of shape `(batch, F)`, where `F` denotes the feature dimension.
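For a single graph, i.e. without the batch dimension, these quantities are plain arrays. A minimal sketch for a toy triangle graph (the values are only illustrative):
import numpy as np
# Toy graph: a triangle with N=3 nodes and M=3 directed edges.
nodes = np.array([[0., 1.], [1., 0.], [1., 1.]])  # ([N], F) node features with F=2
edges = np.array([[1.], [2.], [3.]])              # ([M], F) edge features with F=1
indices = np.array([[0, 1], [1, 2], [2, 0]])      # ([M], 2) connections as (i, j)
state = np.array([0.])                            # (F, ) graph state with F=1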
A major issue for graphs is their flexible size and shape when using mini-batches. Here, for a graph implementation in the spirit of keras, the batch dimension should be kept also in between layers. This is realized by using `RaggedTensor`s.
Graph tensors for edge indices or attributes of multiple graphs are passed to the model in the form of ragged tensors of shape `(batch, None, Dim)`, where `Dim` denotes a fixed feature or index dimension. Such a ragged tensor has `ragged_rank=1`, with one ragged dimension indicated by `None`, and is built from a value plus a partition tensor. For example, the graph structure is represented by an index list of shape `(batch, None, 2)` with the index of incoming or receiving node `i` and outgoing or sending node `j` as `(i, j)`. Note that an additional edge `(j, i)` is required for undirected graphs; see the sketch below.
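A minimal numpy sketch (for illustration only) of adding such reverse edges to a directed index list:
import numpy as np
# Directed index list (i, j) for a single graph.
single_idx = np.array([[0, 1], [1, 2], [2, 0]])
# Append the flipped pairs (j, i) to make the graph undirected.
undirected_idx = np.concatenate([single_idx, np.flip(single_idx, axis=-1)], axis=0)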
A ragged constant can be easily created and passed to a model:
import tensorflow as tf
import numpy as np
idx = [[[0, 1], [1, 0]], [[0, 1], [1, 2], [2, 0]], [[0, 0]]] # batch_size=3
# Get ragged tensor of shape (3, None, 2)
print(tf.ragged.constant(idx, ragged_rank=1, inner_shape=(2, )).shape)
print(tf.RaggedTensor.from_row_lengths(np.concatenate(idx), [len(i) for i in idx]).shape)
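As noted above, the ragged tensor is composed of a value plus a partition tensor, which can be inspected directly (continuing the snippet above):
rt = tf.ragged.constant(idx, ragged_rank=1, inner_shape=(2, ))
print(rt.values.shape)   # value tensor of all stacked edges, shape (6, 2)
print(rt.row_lengths())  # partition tensor as row lengths: [2, 3, 1]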
Models can be set up in a functional way. An example of message passing built from fundamental operations:
import tensorflow as tf
from kgcnn.layers.gather import GatherNodes
from kgcnn.layers.modules import Dense, LazyConcatenate # ragged support
from kgcnn.layers.pooling import PoolingLocalMessages, PoolingNodes
ks = tf.keras
n = ks.layers.Input(shape=(None, 3), name='node_input', dtype="float32", ragged=True)
ei = ks.layers.Input(shape=(None, 2), name='edge_index_input', dtype="int64", ragged=True)
n_in_out = GatherNodes()([n, ei])
node_messages = Dense(10, activation='relu')(n_in_out)
node_updates = PoolingLocalMessages()([n, node_messages, ei])
n_node_updates = LazyConcatenate(axis=-1)([n, node_updates])
n_embedding = Dense(1)(n_node_updates)
g_embedding = PoolingNodes()(n_embedding)
message_passing = ks.models.Model(inputs=[n, ei], outputs=g_embedding)
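The model can then be called directly on ragged tensors, for example with random node features and the index list `idx` from above:
# Node counts per graph are 2, 3 and 1, matching the index list `idx`.
nodes = tf.RaggedTensor.from_row_lengths(
    np.random.rand(6, 3).astype("float32"), [2, 3, 1])
edge_indices = tf.ragged.constant(idx, ragged_rank=1, inner_shape=(2, ), dtype="int64")
out = message_passing([nodes, edge_indices])  # one embedding per graph, shape (3, 1)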
or via sub-classing of the message-passing base layer, where only `message_function` and `update_nodes` have to be implemented:
from kgcnn.layers.message import MessagePassingBase
from kgcnn.layers.modules import Dense, LazyConcatenate

class MyMessageNN(MessagePassingBase):

    def __init__(self, units, **kwargs):
        super(MyMessageNN, self).__init__(**kwargs)
        self.dense = Dense(units)
        self.concat = LazyConcatenate(axis=-1)

    def message_function(self, inputs, **kwargs):
        n_in, n_out, edges = inputs
        return self.dense(n_out)

    def update_nodes(self, inputs, **kwargs):
        nodes, nodes_update = inputs
        return self.concat([nodes, nodes_update])
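The custom layer can be used like any other layer on ragged node and edge-index tensors. A usage sketch, assuming the base layer's call signature of `[nodes, edge_indices]`:
# Assumes `nodes` and `edge_indices` as constructed in the example above.
message_layer = MyMessageNN(units=16)
node_updates = message_layer([nodes, edge_indices])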
The following models, proposed in the literature, have a module in `kgcnn.literature`. The module usually exposes a `make_model` function to create a `tf.keras.models.Model`, which features ragged tensors as input or output (a usage sketch follows the list below). The models can, but do not have to, be built completely from `kgcnn.layers` and can for example include original implementations (with proper licensing).
- GCN: Semi-Supervised Classification with Graph Convolutional Networks by Kipf et al. (2016)
- Schnet: SchNet – A deep learning architecture for molecules and materials by Schütt et al. (2017)
- GAT: Graph Attention Networks by Veličković et al. (2018)
- GraphSAGE: Inductive Representation Learning on Large Graphs by Hamilton et al. (2017)
- DimeNetPP: Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules by Klicpera et al. (2020)
- GNNExplainer: GNNExplainer: Generating Explanations for Graph Neural Networks by Ying et al. (2019)
- AttentiveFP: Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism by Xiong et al. (2019)
... and many more:
- INorp: Interaction Networks for Learning about Objects, Relations and Physics by Battaglia et al. (2016)
- Megnet: Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals by Chen et al. (2019)
- NMPN: Neural Message Passing for Quantum Chemistry by Gilmer et al. (2017)
- Unet: Graph U-Nets by H. Gao and S. Ji (2019)
- GATv2: How Attentive are Graph Attention Networks? by Brody et al. (2021)
- GIN: How Powerful are Graph Neural Networks? by Xu et al. (2019)
- PAiNN: Equivariant message passing for the prediction of tensorial properties and molecular spectra by Schütt et al. (2020)
- DMPNN: Analyzing Learned Molecular Representations for Property Prediction by Yang et al. (2019)
- HamNet: HamNet: Conformation-Guided Molecular Representation with Hamiltonian Neural Networks by Li et al. (2021)
- CGCNN: Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties by Xie et al. (2018)
- CMPNN: Communicative Representation Learning on Attributed Molecular Graphs by Song et al. (2020)
- EGNN: E(n) Equivariant Graph Neural Networks by Satorras et al. (2021)
- MAT: Molecule Attention Transformer by Maziarka et al. (2020)
- MXMNet: Molecular Mechanics-Driven Graph Neural Network with Multiplex Graph for Molecular Structures by Zhang et al. (2020)
- RGCN: Modeling Relational Data with Graph Convolutional Networks by Schlichtkrull et al. (2017)
- GNNFilm: GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation by Marc Brockschmidt (2020)
- HDNNP2nd: Atom-centered symmetry functions for constructing high-dimensional neural network potentials by Jörg Behler (2011)
- HDNNP4th: A fourth-generation high-dimensional neural network potential with accurate electrostatics including non-local charge transfer by Ko et al. (2021)
- DGIN: Improved Lipophilicity and Aqueous Solubility Prediction with Composite Graph Neural Networks by Wieder et al. (2021)
- MoGAT: Multi-order graph attention network for water solubility prediction and interpretation by Lee et al. (2023)
- rGIN: Random Features Strengthen Graph Neural Networks by Sato et al. (2020)
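As referenced above, a minimal sketch of creating a literature model via its `make_model` function; the exact hyperparameters and their defaults depend on the kgcnn version, so the values below are only illustrative:
from kgcnn.literature.GCN import make_model

# Unspecified hyperparameters fall back to the module defaults.
model = make_model(
    name="GCN",
    inputs=[{"shape": (None, 8), "name": "node_attributes", "dtype": "float32", "ragged": True},
            {"shape": (None, 1), "name": "edge_weights", "dtype": "float32", "ragged": True},
            {"shape": (None, 2), "name": "edge_indices", "dtype": "int64", "ragged": True}],
    output_mlp={"use_bias": True, "units": 1, "activation": "linear"}
)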
How to construct ragged tensors is shown above. Moreover, some data handling classes are given in `kgcnn.data`.
Graphs are represented by a dictionary of (numpy) arrays, a `GraphDict`, which behaves like a python `dict`. There are graph pre- and postprocessors in `kgcnn.graph` which take specific properties by name and apply a processing function or transformation.
from kgcnn.data.base import GraphDict
# Single graph.
graph = GraphDict({"edge_indices": [[1, 0], [0, 1]], "node_label": [[0], [1]]})
graph.set("graph_labels", [0]) # use set(), get() to assign (tensor) properties.
graph.set("edge_attributes", [[1.0], [2.0]])
graph.to_networkx()
# Modify with e.g. preprocessor.
from kgcnn.graph.preprocessor import SortEdgeIndices
SortEdgeIndices(edge_indices="edge_indices", edge_attributes="^edge_(?!indices$).*", in_place=True)(graph)
A `MemoryGraphList` should behave identically to a python list but contain only `GraphDict` items.
from kgcnn.data.base import MemoryGraphList
# List of graph dicts.
graph_list = MemoryGraphList([{"edge_indices": [[0, 1], [1, 0]]}, {"edge_indices": [[0, 0]]}, {}])
graph_list.clean(["edge_indices"]) # Remove graphs without property
graph_list.get("edge_indices") # opposite is set()
# Easily cast to (ragged) tf-tensor; makes copy.
tensor = graph_list.tensor([{"name": "edge_indices", "ragged": True}]) # config of keras `Input` layer
# Or directly modify list.
for i, x in enumerate(graph_list):
    x.set("graph_number", [i])
print(len(graph_list), graph_list[:2]) # Also supports indexing lists.
The `MemoryGraphDataset` inherits from `MemoryGraphList` but must be initialized with file information on disk that points to a `data_directory` for the dataset. The `data_directory` can have a subdirectory for files and/or a single file such as a CSV file:
├── data_directory
    ├── file_directory
    │   ├── *.*
    │   └── ...
    ├── file_name
    └── dataset_name.kgcnn.pickle
A base dataset class is created with path and name information:
from kgcnn.data.base import MemoryGraphDataset
dataset = MemoryGraphDataset(data_directory="ExampleDir/",
                             dataset_name="Example",
                             file_name=None, file_directory=None)
dataset.save() # opposite is load().
The subclasses `QMDataset`, `MoleculeNetDataset`, `CrystalDataset`, `VisualGraphDataset` and `GraphTUDataset` further have functions required for the specific dataset type to convert and process files such as '.txt', '.sdf', '.xyz' etc. Most subclasses implement `prepare_data()` and `read_in_memory()` with dataset-dependent arguments. An example for `MoleculeNetDataset` is shown below. For more details, see the tutorials in notebooks.
from kgcnn.data.moleculenet import MoleculeNetDataset
# File directory and files must exist.
# Here 'ExampleDir' and 'ExampleDir/data.csv' with columns "smiles" and "label".
dataset = MoleculeNetDataset(dataset_name="Example",
                             data_directory="ExampleDir/",
                             file_name="data.csv")
dataset.prepare_data(overwrite=True, smiles_column_name="smiles", add_hydrogen=True,
                     make_conformers=True, optimize_conformer=True, num_workers=None)
dataset.read_in_memory(label_column_name="label", add_hydrogen=False,
                       has_conformers=True)
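To feed the dataset to a model, its properties can be cast to (ragged) tensors. A sketch, assuming labels were read as `graph_labels` and node attributes have been generated for the graphs (e.g. via `set_attributes()`):
import numpy as np
labels = np.array(dataset.get("graph_labels"))
x_input = dataset.tensor([{"name": "node_attributes", "ragged": True},
                          {"name": "edge_indices", "ragged": True}])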
In data.datasets there are graph learning benchmark datasets as subclasses, which are downloaded from popular graph archives like TUDatasets, MatBench or MoleculeNet.
The subclasses `GraphTUDataset2020`, `MatBenchDataset2020` and `MoleculeNetDataset2018` download and read the available datasets by name.
There are also specific dataset subclasses for each dataset to handle additional processing or downloading from individual sources:
from kgcnn.data.datasets.MUTAGDataset import MUTAGDataset
dataset = MUTAGDataset() # inherits from GraphTUDataset2020
Downloaded datasets are stored in `~/.kgcnn/datasets` on your computer. Please remove them manually if no longer required.
A set of example training scripts can be found in training. Training scripts are configurable with a hyperparameter config file and command line arguments regarding model and dataset.
You can find a table of common benchmark datasets in results.
Some known issues to be aware of, if using and making new models or layers with `kgcnn`:
- `RaggedTensor` could not be used as a keras model output (see issue), which has been mostly resolved in TF 2.8.
- Using `RaggedTensor`s of arbitrary ragged rank apart from `kgcnn.layers.modules` can cause a significant performance decrease. This is (we think) due to shape checks during add, multiply or concatenate. We therefore use lazy add and concat in the `kgcnn.layers.modules` layers, or directly operate on the value tensor where the ragged rank allows it.
If you want to cite this repo, please refer to our paper:
@article{REISER2021100095,
    title = {Graph neural networks in TensorFlow-Keras with RaggedTensor representation (kgcnn)},
    journal = {Software Impacts},
    pages = {100095},
    year = {2021},
    issn = {2665-9638},
    doi = {https://doi.org/10.1016/j.simpa.2021.100095},
    url = {https://www.sciencedirect.com/science/article/pii/S266596382100035X},
    author = {Patrick Reiser and Andre Eberhard and Pascal Friederich}
}