Benchmarking Graph Neural Networks in Solving Hard Constraint Satisfaction Problems

2026-02-20

Machine Learning
AI summary

The authors looked at how well graph neural networks (GNNs) solve really hard optimization problems compared to traditional methods. They created new, tough test problems using ideas from physics to make a fair comparison. Their tests showed that classic algorithms still do better than GNNs for these hard cases. They also talk about why neural networks find these problems challenging and provide their test problems for others to use.

graph neural networks, optimization problems, classical heuristics, random benchmarks, statistical physics, algorithm performance, hard instances, random constraint satisfaction problems, benchmarking
Authors
Geri Skenderi, Lorenzo Buffoni, Francesco D'Amico, David Machado, Raffaele Marino, Matteo Negri, Federico Ricci-Tersenghi, Carlo Lucibello, Maria Chiara Angelini
Abstract
Graph neural networks (GNNs) are increasingly applied to hard optimization problems, often with claims of superiority over classical heuristics. However, such claims risk being unfounded due to the lack of standard benchmarks on truly hard instances. From a statistical physics perspective, we propose new hard benchmarks based on random constraint satisfaction problems. We provide these benchmarks, along with performance results from both classical heuristics and GNNs. Our fair comparison shows that classical algorithms still outperform GNNs. We discuss the challenges that neural networks face in this domain. Future claims of superiority can be made more robust by using our benchmarks, available at https://github.com/ArtLabBocconi/RandCSPBench.
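To make "hard random instances" concrete: a minimal sketch of how one might sample such an instance, using random 3-SAT as an example, where instances drawn with a clause-to-variable ratio near the satisfiability threshold (around 4.267) are known to be hardest for solvers. This is an illustration only; the function name and the instance format are assumptions here, not the benchmark format actually used in the RandCSPBench repository.

```python
import random

def random_3sat(n_vars, alpha=4.267, seed=0):
    """Sample a random 3-SAT instance with clause density alpha.

    Near alpha ~ 4.267, random 3-SAT sits at its satisfiability
    threshold, where typical instances are hardest for solvers.
    Returns clauses as lists of signed ints (DIMACS-style literals:
    a positive literal v means variable v, -v means its negation).
    """
    rng = random.Random(seed)
    n_clauses = int(alpha * n_vars)
    clauses = []
    for _ in range(n_clauses):
        # Pick 3 distinct variables, then negate each with probability 1/2.
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

clauses = random_3sat(100)
print(len(clauses))  # 426
```

Benchmarking at this density, rather than on easy under- or over-constrained instances, is what makes a GNN-vs-heuristic comparison meaningful.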