Unifying approach to uniform expressivity of graph neural networks

2026-02-20 · Machine Learning

Machine Learning, Artificial Intelligence, Logic in Computer Science
AI summary

The authors study how powerful Graph Neural Networks (GNNs) are by connecting them to known graph algorithms and logic. They introduce Template GNNs (T-GNNs), a new way to update node features by aggregating over specific small patterns (templates) in the graph instead of just neighbors or the whole graph. They also define a matching logic and prove that T-GNNs and this logic have the same ability to distinguish graph structures. Their framework helps explain and unify different GNN models by showing them to be special cases of T-GNNs.

Graph Neural Networks, Weisfeiler-Leman algorithm, graph templates, aggregation, modal logic, bisimulation, expressive power, node features, graph embeddings
Authors
Huan Luo, Jonni Virtema
Abstract
The expressive power of Graph Neural Networks (GNNs) is often analysed via correspondence to the Weisfeiler-Leman (WL) algorithm and fragments of first-order logic. Standard GNNs are limited to performing aggregation over immediate neighbourhoods or over global read-outs. To increase their expressivity, recent attempts have been made to incorporate substructural information (e.g. cycle counts and subgraph properties). In this paper, we formalize this architectural trend by introducing Template GNNs (T-GNNs), a generalized framework where node features are updated by aggregating over valid template embeddings from a specified set of graph templates. We propose a corresponding logic, Graded template modal logic (GML(T)), and generalized notions of template-based bisimulation and WL algorithm. We establish an equivalence between the expressive power of T-GNNs and GML(T), and provide a unifying approach for analysing GNN expressivity: we show how standard AC-GNNs and its recent variants can be interpreted as instantiations of T-GNNs.