Understanding the Role of Hallucination in Reinforcement Post-Training of Multimodal Reasoning Models

2026-04-03

Machine Learning · Artificial Intelligence · Computer Vision and Pattern Recognition
AI summary

The authors studied how reinforcement learning (RL) affects multimodal large language models that handle both text and images. They created a method called the Hallucination-as-Cue Framework to test whether models truly learn from visual data or merely guess when information is missing. By deliberately removing or replacing important visual details, they found that RL-trained models still improved at reasoning, even when forced to rely on hallucinations. This suggests that RL helps models in ways beyond just understanding images, challenging common beliefs about how these models are trained.

reinforcement learning, multimodal large language models, visual reasoning, model hallucination, post-training, hallucination-as-cue framework, modality-specific corruption, training dynamics, benchmark evaluation, reasoning performance
Authors
Gengwei Zhang, Jie Peng, Zhen Tan, Mufan Qiu, Hossein Nourkhiz Mahjoub, Vaishnav Tadiparthi, Kwonjoon Lee, Yanyong Zhang, Tianlong Chen
Abstract
The recent success of reinforcement learning (RL) in large reasoning models has inspired the growing adoption of RL for post-training Multimodal Large Language Models (MLLMs) to enhance their visual reasoning capabilities. Although many studies have reported improved performance, it remains unclear whether RL training truly enables models to learn from visual information. In this work, we propose the Hallucination-as-Cue Framework, an analytical framework designed to investigate the effects of RL-based post-training on multimodal reasoning models from the perspective of model hallucination. Specifically, we introduce hallucination-inductive, modality-specific corruptions that remove or replace essential information required to derive correct answers, thereby forcing the model to reason by hallucination. By applying these corruptions during both training and evaluation, our framework provides a unique perspective for diagnosing RL training dynamics and understanding the intrinsic properties of datasets. Through extensive experiments and analyses across multiple multimodal reasoning benchmarks, we reveal that the role of model hallucination in RL training is more significant than previously recognized. For instance, we find that RL post-training under purely hallucination-inductive settings can still significantly improve models' reasoning performance, and in some cases even outperform standard training. These findings challenge prevailing assumptions about MLLM reasoning training and motivate the development of more modality-aware RL-based training designs.
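The abstract does not specify how the hallucination-inductive corruptions are implemented. As a rough illustration only, the sketch below shows what a modality-specific corruption of the visual input might look like: "remove" blanks the image entirely, while "replace" substitutes unrelated noise, so any answer the model derives from the image must come from hallucination rather than visual evidence. The function name, modes, and array-based image representation are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def corrupt_image(image: np.ndarray, mode: str = "remove", seed: int = 0) -> np.ndarray:
    """Hypothetical hallucination-inductive corruption of the image modality.

    "remove"  -> blank the image, deleting all visual information.
    "replace" -> substitute random noise, mismatching image and question.
    Either way, the evidence needed for a correct answer is gone, forcing
    the model to reason by hallucination.
    """
    rng = np.random.default_rng(seed)
    if mode == "remove":
        return np.zeros_like(image)
    if mode == "replace":
        return rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    raise ValueError(f"unknown corruption mode: {mode}")

# Toy 4x4 RGB "image" standing in for a benchmark input.
img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
blank = corrupt_image(img, "remove")
noise = corrupt_image(img, "replace")
```

In the framework described above, such corruptions would be applied both during RL training and at evaluation time, letting one measure how much of the observed reasoning gain survives when the visual signal carries no usable information.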