A set of layers for graph convolutions in TensorFlow Keras that use RaggedTensors.
- General
- Requirements
- Installation
- Documentation
- Implementation details
- Literature
- Datasets
- Examples
- Issues
- Citing
- References
## General

The package in kgcnn contains several layer classes to build up graph convolution models. Some models are given as examples. Documentation is generated in docs. This repo is still under construction. Any comments, suggestions or help are very welcome!
## Requirements

For kgcnn, usually the latest version of tensorflow is required, but it is only listed as an extra requirement in the setup.py for simplicity. Additional python packages are placed in the setup.py requirements and are installed automatically.

- tensorflow>=2.4.1
## Installation

Clone the repository https://github.com/aimat-lab/gcnn_keras and install it in editable mode:

```bash
pip install -e ./gcnn_keras
```

or install the latest release via the Python Package Index:

```bash
pip install kgcnn
```
## Documentation

Auto-documentation is generated at https://kgcnn.readthedocs.io/en/latest/index.html.
## Implementation details

The most frequent usage for graph convolutions is either node or graph classification. Regarding size, either a single large graph, e.g. a citation network, or many small (batched) graphs like molecules have to be considered. Graphs can be represented by an index list of connections plus feature information. Typical quantities in tensor format used to describe a graph are listed below.
- `nodes`: Node-list of shape `(batch, N, F)` where `N` is the number of nodes and `F` is the node feature dimension.
- `edges`: Edge-list of shape `(batch, M, F)` where `M` is the number of edges and `F` is the edge feature dimension.
- `indices`: Connection-list of shape `(batch, M, 2)` where `M` is the number of edges. The indices denote a connection of incoming node `i` and outgoing node `j` as `(i, j)`.
- `state`: Graph state information of shape `(batch, F)` where `F` denotes the feature dimension.
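As an illustration, these quantities for a single, hypothetical triangle graph (three nodes, two node features, one edge feature, written here without the batch dimension) could look like:

```python
import numpy as np

# Triangle graph: 3 nodes with 2 features each, shape (N, F) = (3, 2).
nodes = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

# Directed edges in both directions, shape (M, 2) = (6, 2).
indices = np.array([[0, 1], [1, 0], [0, 2], [2, 0], [1, 2], [2, 1]])

# One feature per edge, shape (M, F) = (6, 1).
edges = np.ones((6, 1))

# Optional graph-level state with one feature, shape (F,) = (1,).
state = np.array([0.0])
```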
A major issue for graphs is their flexible size and shape when using mini-batches. Here, for a graph implementation in the spirit of keras, the batch dimension should also be kept in between layers. This is realized by using `RaggedTensor`s.
In order to input batched tensors of variable length with keras, either zero-padding plus masking or ragged and sparse tensors can be used. Moreover, for more flexibility, a dataloader from tf.keras.utils.Sequence is often used to input disjoint graph representations (a hedged sketch of such a loader follows below). Tools for converting numpy or scipy arrays are found in utils.
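For orientation, a minimal, hypothetical tf.keras.utils.Sequence loader; all names and the batching scheme are made up, and it serves ragged batches here, while a disjoint variant would concatenate graphs instead:

```python
import numpy as np
import tensorflow as tf

class GraphSequence(tf.keras.utils.Sequence):
    """Hypothetical loader that serves graphs in ragged mini-batches."""

    def __init__(self, node_list, index_list, labels, batch_size=32):
        self.node_list = node_list    # list of numpy arrays of shape (N_i, F)
        self.index_list = index_list  # list of numpy arrays of shape (M_i, 2)
        self.labels = labels          # numpy array of shape (num_graphs, ...)
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.node_list) / self.batch_size))

    def __getitem__(self, idx):
        # Slice out one batch and pack it into ragged tensors.
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        nodes = tf.ragged.constant(self.node_list[sl], ragged_rank=1)
        indices = tf.ragged.constant(self.index_list[sl], ragged_rank=1, inner_shape=(2,))
        return (nodes, indices), self.labels[sl]
```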
Here, for ragged tensors, the node-list of shape `(batch, None, F)` and edge-list of shape `(batch, None, Fe)` have one ragged dimension `(None,)`.
The graph structure is represented by an index-list of shape `(batch, None, 2)` with the index of incoming node `i` and outgoing node `j` as `(i, j)`.
The first index of the incoming node `i` is usually expected to be sorted for faster pooling operations, but can also be unsorted (see layer arguments). Furthermore, the graph is directed, so an additional edge with `(j, i)` is required for undirected graphs. A ragged constant can be directly obtained from a list of numpy arrays: `tf.ragged.constant(indices, ragged_rank=1, inner_shape=(2,))`, which yields shape `(batch, None, 2)`.
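For example, with made-up edge lists for two small graphs:

```python
import numpy as np
import tensorflow as tf

# Directed edge lists for two graphs, each of shape (M_i, 2).
indices = [np.array([[0, 1], [1, 0], [1, 2], [2, 1]], dtype="int64"),
           np.array([[0, 1], [1, 0]], dtype="int64")]

# Ragged constant with one ragged dimension and shape (batch, None, 2).
edge_indices = tf.ragged.constant(indices, ragged_rank=1, inner_shape=(2,), dtype="int64")
print(edge_indices.shape)  # (2, None, 2)
```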
Models can be set up in a functional way. Example message passing from fundamental operations:

```python
import tensorflow.keras as ks
from kgcnn.layers.gather import GatherNodes
from kgcnn.layers.keras import Dense, Concatenate  # ragged support
from kgcnn.layers.pooling import PoolingLocalMessages, PoolingNodes

# Ragged inputs: node features (batch, None, 3) and edge indices (batch, None, 2).
n = ks.layers.Input(shape=(None, 3), name='node_input', dtype="float32", ragged=True)
ei = ks.layers.Input(shape=(None, 2), name='edge_index_input', dtype="int64", ragged=True)

n_in_out = GatherNodes()([n, ei])  # gather node features of connected pairs per edge
node_messages = Dense(10, activation='relu')(n_in_out)  # transform into edge messages
node_updates = PoolingLocalMessages()([n, node_messages, ei])  # aggregate messages at receiving nodes
n_node_updates = Concatenate(axis=-1)([n, node_updates])  # combine old node state with update
n_embedd = Dense(1)(n_node_updates)  # node embeddings
g_embedd = PoolingNodes()(n_embedd)  # pool node embeddings to a graph embedding

message_passing = ks.models.Model(inputs=[n, ei], outputs=g_embedd)
```
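The resulting model behaves like any other keras model; a minimal sketch of a forward pass with made-up ragged input data, reusing the ragged constant construction shown above:

```python
import numpy as np
import tensorflow as tf

# Two toy graphs with 3 and 2 nodes and 3 node features each (random data).
nodes = tf.ragged.constant([np.random.rand(3, 3), np.random.rand(2, 3)],
                           ragged_rank=1, inner_shape=(3,), dtype="float32")
indices = tf.ragged.constant([[[0, 1], [1, 0], [1, 2], [2, 1]],
                              [[0, 1], [1, 0]]],
                             ragged_rank=1, inner_shape=(2,), dtype="int64")

# One graph embedding per graph, shape (2, 1).
out = message_passing([nodes, indices])
```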
## Literature

Versions of the following models from the literature are implemented:
- GCN: Semi-Supervised Classification with Graph Convolutional Networks by Kipf et al. (2016)
- INorp: Interaction Networks for Learning about Objects, Relations and Physics by Battaglia et al. (2016)
- Megnet: Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals by Chen et al. (2019)
- NMPN: Neural Message Passing for Quantum Chemistry by Gilmer et al. (2017)
- Schnet: SchNet – A deep learning architecture for molecules and materials by Schütt et al. (2017)
- Unet: Graph U-Nets by H. Gao and S. Ji (2019)
- GNNExplainer: GNNExplainer: Generating Explanations for Graph Neural Networks by Ying et al. (2019)
- GraphSAGE: Inductive Representation Learning on Large Graphs by Hamilton et al. (2017)
- GAT: Graph Attention Networks by Veličković et al. (2018)
- DimeNetPP: Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules by Klicpera et al. (2020)
- AttentiveFP: Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism by Xiong et al. (2019)
- GIN: How Powerful are Graph Neural Networks? by Xu et al. (2019)
- PAiNN: Equivariant message passing for the prediction of tensorial properties and molecular spectra by Schütt et al. (2020)
## Datasets

In data there are simple data handling tools that are used for the examples, which include loading datasets.
## Examples

A set of example training scripts can be found in example.
## Issues

Some known issues to be aware of when using or building new models or layers with kgcnn:

- `RaggedTensor`s can not yet be used as a keras model output (tensorflow/tensorflow#42320), which means only padded tensors can be used for batched node embedding tasks (see the padding sketch below).
- Using `RaggedTensor`s with arbitrary ragged rank outside of `kgcnn.layers.keras` can cause a significant performance decrease.
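As a workaround for the first point, ragged node embeddings can be converted into a zero-padded dense tensor; a minimal sketch using the standard `RaggedTensor.to_tensor()` method with made-up data:

```python
import tensorflow as tf

# Hypothetical ragged node embeddings of shape (batch, None, F).
node_embeddings = tf.ragged.constant([[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
                                      [[0.7, 0.8]]],
                                     ragged_rank=1, inner_shape=(2,))

# Pad to a dense tensor of shape (batch, N_max, F); zeros mark missing nodes.
padded = node_embeddings.to_tensor(default_value=0.0)
print(padded.shape)  # (2, 3, 2)
```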
## Citing

If you want to cite this repo, refer to our paper:

```bibtex
@article{REISER2021100095,
    title = {Graph neural networks in TensorFlow-Keras with RaggedTensor representation (kgcnn)},
    journal = {Software Impacts},
    pages = {100095},
    year = {2021},
    issn = {2665-9638},
    doi = {https://doi.org/10.1016/j.simpa.2021.100095},
    url = {https://www.sciencedirect.com/science/article/pii/S266596382100035X},
    author = {Patrick Reiser and Andre Eberhard and Pascal Friederich}
}
```