Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key
2026-05-07 • Artificial Intelligence • Computation and Language
AI summary
The authors created ScaleLogic, a testing environment that controls how hard logical problems are by varying the length of the required proof and the expressiveness of the logic used. They found that the amount of reinforcement-learning training a model needs grows as a power law as proofs get longer, and that more expressive logic makes this growth steeper. Models trained with more expressive logic also perform better on math and reasoning benchmarks and transfer more efficiently to those tasks. This pattern holds across different RL training methods, and curriculum-style (easy-to-hard) training makes the scaling more efficient.
Reinforcement learning • Large language models • Logical reasoning • Proof planning • First-order logic • Expressiveness • Scaling laws • Curriculum learning • Transfer learning • Power law
Authors
Tianle Wang, Zhaoyang Wang, Guangchen Lan, Xinpeng Wei, Sipeng Zhang, Guanwen Qiu, Abulhair Saparov
Abstract
Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. Our proposed framework supports a wide range of logics, from simple implication-only logic ("if-then") to more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute $T$ follows a power law with respect to reasoning depth $D$ ($T \propto D^{\gamma}$, $R^{2} > 0.99$), and that the scaling exponent $\gamma$ increases monotonically with logical expressiveness, from $1.04$ to $2.60$. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to $+10.66$ points) and more compute-efficient transfer than less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and that curriculum-based training substantially improves scaling efficiency.
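To make the depth axis concrete, below is a minimal, hypothetical generator for implication-only ("if-then") problems in the spirit of ScaleLogic. The abstract does not describe the authors' actual generator, so the function name, atom encoding, and output format here are assumptions of this sketch, not the paper's implementation; the only property it aims to illustrate is that proof depth is a single controllable parameter.

```python
# Hypothetical sketch of a depth-controlled, implication-only environment.
# All names and formats are illustrative assumptions, not ScaleLogic's code.
import random

def make_implication_chain(depth: int, vocab_size: int = 100):
    """Build a problem whose proof requires `depth` chained modus-ponens
    steps. Returns (axioms, given_fact, goal)."""
    atoms = random.sample(range(vocab_size), depth + 1)
    # One axiom per link in the chain: P_a -> P_b, P_b -> P_c, ...
    axioms = [f"P{a} -> P{b}" for a, b in zip(atoms, atoms[1:])]
    random.shuffle(axioms)  # hide the chain order from the model
    return axioms, f"P{atoms[0]}", f"P{atoms[-1]}"

axioms, fact, goal = make_implication_chain(depth=4)
print("Axioms:", axioms)
print("Given:", fact, "| Prove:", goal)
```

More expressive settings would extend the axiom grammar with conjunction, disjunction, negation, and universal quantification, while the depth parameter is held as an independent knob.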
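The power-law claim $T \propto D^{\gamma}$ can be checked with a standard log-log fit: regress $\log T$ on $\log D$ and read the slope off as $\gamma$. Here is a minimal sketch of that procedure; the $(D, T)$ pairs are made-up placeholders, not the paper's measurements.

```python
# Fit T ~ c * D^gamma by linear regression in log-log space.
# The data below are hypothetical placeholders for illustration only.
import numpy as np

D = np.array([2, 4, 8, 16, 32])                      # reasoning depth (horizon)
T = np.array([1.1e3, 2.3e3, 4.6e3, 9.5e3, 1.9e4])    # training compute (made up)

# Slope of the log-log fit is the scaling exponent gamma.
gamma, log_c = np.polyfit(np.log(D), np.log(T), deg=1)
pred = np.exp(log_c) * D**gamma

# R^2 computed in log space, matching how power-law fits are usually scored.
resid = np.log(T) - np.log(pred)
r2 = 1 - np.sum(resid**2) / np.sum((np.log(T) - np.log(T).mean())**2)
print(f"gamma ≈ {gamma:.2f}, log-log R^2 = {r2:.3f}")
```

With the paper's measured compute curves in place of the placeholder arrays, this kind of fit is what would yield exponents between $1.04$ and $2.60$ at $R^{2} > 0.99$.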