Message passing all the way up
Abstract

The message passing framework is the foundation of the immense success enjoyed by graph neural networks (GNNs) in recent years. It relies on a simple concept: pairs of nodes in a graph exchange vector-based messages, and every node aggregates the messages it receives in order to update its representation. In spite of its elegance, there exist many problems it provably cannot solve over given input graphs. This has led to a surge of research on going "beyond message passing", seeking to construct GNNs that do not suffer from those limitations, and the term has become ubiquitous in regular discourse. However, have those methods truly moved beyond message passing? In this position paper, I argue about the hidden dangers of using this term, especially when teaching graph representation learning to newcomers. I show that any function of interest we want to compute over graphs can, in all likelihood, be expressed using pairwise message passing, just over a potentially modified graph, and argue that most practical implementations subtly perform this kind of trick anyway. Accordingly, and to hopefully start a productive discussion in the field, I propose that we replace "beyond message passing" with a more tame term: "augmented message passing".
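To make the pairwise message passing concept concrete, here is a minimal sketch of one round of message passing over a small graph. The function name and the linear message/update weights are illustrative assumptions, not anything defined in the paper; a real GNN layer would learn these functions.

```python
import numpy as np

def message_passing_step(x, edges, W_msg, W_upd):
    """One round of pairwise message passing (illustrative sketch).

    x:      (num_nodes, dim) array of node features
    edges:  list of (sender, receiver) pairs
    W_msg:  (dim, dim) weights of a linear message function
    W_upd:  (2 * dim, dim) weights of a linear update function
    """
    num_nodes, dim = x.shape
    aggregated = np.zeros((num_nodes, dim))
    # Each edge carries a vector-based message from sender to receiver...
    for sender, receiver in edges:
        aggregated[receiver] += x[sender] @ W_msg  # sum aggregation
    # ...and every node combines its own features with what it received.
    return np.concatenate([x, aggregated], axis=-1) @ W_upd

# Toy usage: a 3-node path graph with random features and weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
h = message_passing_step(x, edges, rng.normal(size=(4, 4)), rng.normal(size=(8, 4)))
print(h.shape)  # (3, 4): one updated representation per node
```

In this framing, the "augmented message passing" view the abstract proposes amounts to running exactly this computation, only over a potentially modified edge list.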

Author's notes