Beyond Distribution Sharpening: The Importance of Task Rewards

2026-04-17 | Machine Learning, Artificial Intelligence
AI summary

The authors study how reinforcement learning (RL) improves language models by comparing two ways RL might help: concentrating the model on what it already knows (distribution sharpening) versus teaching it new behavior from task rewards. They find that sharpening alone yields limited and unstable gains, while task-based rewards produce robust performance improvements and stable learning, particularly on math problems. These conclusions are confirmed experimentally across several models.

Tags: reinforcement learning, distribution sharpening, task-reward-based learning, Llama-3.2-3B-Instruct, Qwen models, machine learning stability, performance improvement, math datasets, language models, training paradigms
Authors
Sarthak Mittal, Leo Gagnon, Guillaume Lajoie
Abstract
Frontier models have demonstrated exceptional capabilities following the integration of task-reward-based reinforcement learning (RL) into their training pipelines, enabling systems to evolve from pure reasoning models into sophisticated agents. However, debate persists regarding whether RL genuinely instills new skills within a base model or merely sharpens its existing distribution to elicit latent capabilities. To address this dichotomy, we present an explicit comparison between distribution sharpening and task-reward-based learning, utilizing RL as a tool to implement both paradigms. Our analysis reveals the inherent limitations of distribution sharpening, demonstrating from first principles how and why its optima can be unfavorable and the approach fundamentally unstable. Furthermore, our experiments with Llama-3.2-3B-Instruct, Qwen2.5-3B-Instruct, and Qwen3-4B-Instruct-2507 on math datasets confirm that sharpening yields limited gains, whereas incorporating a task-based reward signal achieves robust performance improvements and stable learning.
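The dichotomy in the abstract can be illustrated with a toy example (not from the paper): a bandit-style REINFORCE update over a small set of candidate answers, where the only difference between the two paradigms is the reward function. Sharpening rewards the policy with its own log-probability (equivalent in expectation to entropy minimization, so it collapses onto the current mode), while the task reward pays out only for the correct answer. All names, values, and the three-answer setup are hypothetical.

```python
import math
import random

random.seed(0)

# Toy "model": a categorical policy over three candidate answers.
# Hypothetically, "C" is the correct answer but starts with low probability,
# while "A" is the model's current mode.
answers = ["A", "B", "C"]
init_logits = {"A": 1.0, "B": 0.0, "C": -1.0}
correct = "C"

def probs(logits):
    """Softmax over the logits."""
    z = sum(math.exp(v) for v in logits.values())
    return {a: math.exp(v) / z for a, v in logits.items()}

def reinforce(reward_fn, steps=2000, lr=0.1):
    """Plain REINFORCE on a copy of the initial logits."""
    th = dict(init_logits)
    for _ in range(steps):
        p = probs(th)
        a = random.choices(answers, weights=[p[x] for x in answers])[0]
        r = reward_fn(a, p)
        # grad of log pi(a) w.r.t. the logits is one_hot(a) - p
        for x in answers:
            th[x] += lr * r * ((1.0 if x == a else 0.0) - p[x])
    return probs(th)

# Paradigm 1: distribution sharpening -- reward is the model's own
# log-probability of the sampled answer, so mass concentrates on the mode "A".
sharpened = reinforce(lambda a, p: math.log(p[a]))

# Paradigm 2: task reward -- reward is 1 only for the correct answer "C".
task = reinforce(lambda a, p: 1.0 if a == correct else 0.0)

print(f"sharpening  -> P(correct) = {sharpened[correct]:.2f}")
print(f"task reward -> P(correct) = {task[correct]:.2f}")
```

Under sharpening, the probability of the initially unlikely correct answer shrinks further, since the update only reinforces what the model already prefers; with the task reward, the occasional correct sample is reinforced until it dominates. This mirrors, in miniature, why sharpening alone caps out at the base model's latent abilities.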