AI summary
The authors study how large vision-language models (LVLMs) often give wrong answers confidently because their usual confidence scores mix up mistakes from seeing the picture with mistakes from thinking about it. To fix this, they created VL-Calibration, a method that separates visual confidence (how sure the model is about what it sees) from reasoning confidence (how sure it is about its thinking). They estimate visual confidence without needing extra labels by checking how much the model’s image understanding changes under small tweaks and how uncertain its word predictions are. Their tests show that this approach reduces overconfidence in wrong answers and improves the model’s accuracy on many datasets.
Large Vision Language Models, Multimodal Reasoning, Confidence Calibration, Visual Uncertainty, Reinforcement Learning, Visual Grounding, KL-Divergence, Token Entropy, Out-of-distribution, Hallucination
Authors
Wenyi Xiao, Xinchi Xu, Leilei Gan
Abstract
Large Vision Language Models (LVLMs) achieve strong multimodal reasoning but frequently exhibit hallucinations and incorrect responses with high certainty, which hinders their use in high-stakes domains. Existing verbalized confidence calibration methods, largely developed for text-only LLMs, typically optimize a single holistic confidence score using binary answer-level correctness. This design is mismatched to LVLMs: an incorrect prediction may arise from perceptual failures or from reasoning errors given correct perception, and a single confidence score conflates these sources, while visual uncertainty is often dominated by language priors. To address these issues, we propose VL-Calibration, a reinforcement learning framework that explicitly decouples confidence into visual and reasoning confidence. To supervise visual confidence without ground-truth perception labels, we introduce an intrinsic visual certainty estimate that combines (i) visual grounding, measured by KL-divergence under image perturbations, and (ii) internal certainty, measured by token entropy. We further propose token-level advantage reweighting, which focuses optimization on tokens according to their visual certainty, suppressing ungrounded hallucinations while preserving valid perception. Experiments on thirteen benchmarks show that VL-Calibration effectively improves calibration while boosting visual reasoning accuracy, and it generalizes to out-of-distribution benchmarks across model scales and architectures.
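The intrinsic visual certainty estimate described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the `exp(-x)` squashing of KL and entropy into a certainty score, and the mixing weight `alpha` are assumptions, since the abstract does not specify the exact combination rule.

```python
import torch
import torch.nn.functional as F

def intrinsic_visual_certainty(logits_orig, logits_pert, alpha=0.5):
    """Per-token visual certainty from (i) visual grounding and (ii)
    internal certainty. Inputs are next-token logits of shape
    [seq_len, vocab] under the original and a perturbed image."""
    p = F.softmax(logits_orig, dim=-1)
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_pert, dim=-1)
    # (i) Visual grounding: per-token KL(p || q). A large shift under
    # image perturbation suggests the token is weakly grounded.
    kl = (p * (log_p - log_q)).sum(dim=-1)
    # (ii) Internal certainty: per-token entropy of the prediction;
    # high entropy means low certainty.
    entropy = -(p * log_p).sum(dim=-1)
    # Squash both into (0, 1] and mix; the exact mapping is an assumption.
    return alpha * torch.exp(-kl) + (1 - alpha) * torch.exp(-entropy)

def reweight_advantages(advantages, certainty):
    """Token-level advantage reweighting (sketch): down-weight policy
    updates on tokens with low visual certainty. The multiplicative
    rule is an illustrative assumption."""
    return advantages * certainty
```

In an RL fine-tuning loop, `reweight_advantages` would be applied to the per-token advantages before the policy-gradient update, so that tokens the model cannot ground in the image contribute less to the learned confidence.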