HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification

2026-03-16

Machine Learning
AI summary

The authors introduce HorizonMath, a benchmark of over 100 mostly unsolved math problems that require genuine insight to solve but are easy to verify automatically. It tests whether AI models can do new mathematical research rather than recall known answers. Most frontier models score near zero, but GPT 5.4 Pro proposed solutions to two problems that improve on the best published results and may constitute new discoveries pending expert review. HorizonMath is released as an open, growing community resource to encourage further AI attempts at these problems.

artificial intelligence, large language models, mathematical reasoning, benchmark, unsolved problems, automated verification, computational mathematics, novel research, GPT-5, proof verification
Authors
Erik Y. Wang, Sumeet Motwani, James V. Roggeveen, Eliot Hodges, Dulhan Jayalath, Charles London, Kalyan Ramakrishnan, Flaviu Cipcigan, Philip Torr, Alessandro Abate
Abstract
Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can perform novel research is still widely debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning 8 domains in computational and applied mathematics, paired with an open-source evaluation framework for automated verification. Our benchmark targets a class of problems where discovery is hard, requiring meaningful mathematical insight, but verification is computationally efficient and simple. Because these solutions are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. Existing research-level benchmarks instead rely on formal proof verification or manual review, both of which are expensive to scale. Using this platform, we find two problems for which GPT 5.4 Pro proposes solutions that improve on the best-known published results, representing potential novel contributions (pending expert review). We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved problem classes could constitute novel results in the mathematical literature.
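The abstract's key design principle is that discovery is hard while verification is cheap. The paper's actual problems and verifiers are not described here, but the pattern can be sketched with a hypothetical example of the same flavor: finding a large Sidon set (a set of integers whose pairwise sums are all distinct) is a hard search problem, yet any candidate can be checked in O(k²) time and scored against the best-known size. All function names and the scoring scheme below are illustrative, not taken from the HorizonMath framework.

```python
def is_sidon_set(candidate: list[int], n: int) -> bool:
    """Verify a proposed solution: every element lies in [1, n], elements are
    distinct, and all pairwise sums a + b (a <= b) are distinct.
    Verification is O(k^2) for a candidate of size k, regardless of how hard
    the set was to discover."""
    elems = sorted(set(candidate))
    if len(elems) != len(candidate):
        return False  # duplicates are not allowed
    if any(x < 1 or x > n for x in elems):
        return False  # out of the allowed range
    sums = set()
    for i, a in enumerate(elems):
        for b in elems[i:]:
            if a + b in sums:
                return False  # two pairs share a sum: not a Sidon set
            sums.add(a + b)
    return True


def score(candidate: list[int], n: int, best_known: int) -> float:
    """Score a candidate relative to a hypothetical best-known size:
    0.0 if invalid, > 1.0 if it beats the published record."""
    if not is_sidon_set(candidate, n):
        return 0.0
    return len(candidate) / best_known
```

Under this kind of harness, a model's output needs no manual review or formal proof checking: an invalid candidate scores zero automatically, and a score above 1.0 flags a potential improvement on the literature for expert follow-up.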