Reasoning Core: A Scalable Procedural Data Generation Suite for Symbolic Pre-training and Post-Training
2026-03-02 • Computation and Language
AI summary
The authors present Reasoning Core, a tool that automatically generates challenges across different logical and reasoning domains to improve the training of language models. Each challenge comes with a correct answer verified by an external solver, ensuring the training data is reliable. Difficulty can be adjusted continuously, and step-by-step solutions can be included to teach models how to reason from the earliest stages of training. Their experiments show that adding this data improves reasoning without hurting language modeling quality, and that even frontier models like GPT-5 still find these tasks difficult. The code and data are released openly for others to use.
language models · procedural generation · symbolic reasoning · PDDL planning · first-order logic · context-free grammar · Bayesian networks · systems of equations · curriculum learning · reinforcement learning
Authors
Valentin Lacombe, Valentin Quesnel, Damien Sileo
Abstract
Training on verifiable symbolic data is a promising way to expand the reasoning frontier of language models beyond what standard pre-training corpora provide. Yet existing procedural generators often rely on fixed puzzles or templates and do not deliver the distributional breadth needed at scale. We introduce Reasoning Core, a scalable suite that procedurally generates verifiable symbolic reasoning data across core formal domains: PDDL planning over randomized domains, first-order logic with equality, context-free grammar parsing and generation, causal reasoning over random Bayesian networks, and systems of equations. Each task is paired with an external solver for rigorous verification and admits continuous difficulty control for curriculum design. Examples can optionally include solver-derived reasoning traces, enabling supervised training from the earliest pre-training stages, and the same interface provides verifiable reward functions for reinforcement learning. Our experiments show that mixing Reasoning Core data into pre-training improves downstream reasoning while preserving, or slightly improving, language modeling quality. Zero-shot evaluations confirm these tasks challenge frontier models such as GPT-5. The code and data are publicly available under the MIT license.
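The abstract describes a common interface across domains: procedural generation with continuous difficulty control, rigorous verification by an external solver, and a derived reward signal for reinforcement learning. Below is a minimal sketch of how such an interface could look for the systems-of-equations domain. All names (`Task`, `generate_linear_system`, `verify`, `reward`) and the verification-by-substitution strategy are illustrative assumptions, not the actual Reasoning Core API.

```python
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Task:
    # Hypothetical task container: a natural-language prompt, the raw
    # equations for verification, and a ground-truth solution (usable
    # as a solver-derived trace target).
    prompt: str
    equations: List[Tuple[List[int], int]]  # (coefficients, rhs) per row
    solution: List[int]

def generate_linear_system(n_vars: int, seed: int = 0) -> Task:
    """Procedurally generate a random linear system.

    Difficulty is controlled continuously via n_vars (and could also be
    controlled via the coefficient range). A nonzero diagonal entry is
    forced in each row so every variable is constrained by some equation.
    """
    rng = random.Random(seed)
    x = [rng.randint(-5, 5) for _ in range(n_vars)]  # hidden solution
    equations, lines = [], []
    for j in range(n_vars):
        coeffs = [rng.randint(-3, 3) for _ in range(n_vars)]
        coeffs[j] = rng.choice([c for c in range(-3, 4) if c != 0])
        rhs = sum(c * v for c, v in zip(coeffs, x))
        equations.append((coeffs, rhs))
        lines.append(" + ".join(f"{c}*x{i}" for i, c in enumerate(coeffs))
                     + f" = {rhs}")
    return Task(prompt="Solve for all xi:\n" + "\n".join(lines),
                equations=equations, solution=x)

def verify(task: Task, answer: List[int]) -> bool:
    """Solver-style check: substitute the answer into every equation."""
    return all(sum(c * a for c, a in zip(coeffs, answer)) == rhs
               for coeffs, rhs in task.equations)

def reward(task: Task, answer: List[int]) -> float:
    """The same check exposed as a verifiable RL reward function."""
    return 1.0 if verify(task, answer) else 0.0
```

Verifying by substitution rather than by exact match against the hidden solution mirrors the paper's external-solver design: any valid answer is accepted, and the generator's `seed` and `n_vars` parameters make the data stream reproducible and curriculum-ready.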