Seeing to Ground: Visual Attention for Hallucination-Resilient MDLLMs

2026-03-26

Computer Vision and Pattern Recognition
AI summary

The authors study why certain AI models that generate text and images together sometimes make mistakes by imagining things that aren’t really there (hallucinations). They find that these errors come from the model relying too much on language patterns without enough checking against the actual visual content. To fix this, the authors propose a method called VISAGE that, during generation, looks at how focused the model’s attention is on different parts of an image to better align text with visuals. Their approach improves accuracy in tests without needing extra training. VISAGE helps the model avoid language shortcuts and produce more visually grounded descriptions.

Multimodal Diffusion Large Language Models, hallucination, masked decoding, cross-attention, visual grounding, proxy objective mismatch, spatial entropy, inference-time calibration, decoder, token ranking
Authors
Vishal Narnaware, Animesh Gupta, Kevin Zhai, Zhenyi Wang, Mubarak Shah
Abstract
Multimodal Diffusion Large Language Models (MDLLMs) achieve high-concurrency generation through parallel masked decoding, yet these architectures remain prone to multimodal hallucinations. This structural vulnerability stems from an algorithmic flaw: the decoder ranks candidate tokens based on textual likelihood without verifying localized visual support. We establish that this language-only ranking induces an objective mismatch, where language probability mass acts as a misspecified proxy for the intended multimodal task. Consequently, we reinterpret hallucination as a localized optimization error: the decoder exploits language shortcuts to maximize a proxy score at the expense of visual grounding. To address this objective mismatch, we introduce VISAGE, a training-free decoding framework that calibrates the objective at inference time. VISAGE estimates the proxy discrepancy by quantifying the spatial entropy of cross-attention distributions. By enforcing a localization consensus across attention heads, the method penalizes spatially uniform distributions and re-ranks token commitments to favor visually grounded outcomes. We provide an analytical stability guarantee establishing that VISAGE maintains a bounded objective loss under estimation error. Evaluations across hallucination-sensitive and general-purpose benchmarks demonstrate the robustness of the framework, yielding relative gains of 8.59% on MMMU-val and 7.75% on HallusionBench.
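The core mechanism described in the abstract, scoring candidate tokens by language likelihood minus a spatial-entropy penalty averaged across attention heads, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`spatial_entropy`, `rerank_tokens`), the penalty weight `alpha`, and the simple mean-over-heads consensus are all assumptions made for clarity.

```python
import numpy as np

def spatial_entropy(attn_map, eps=1e-12):
    """Shannon entropy of a normalized cross-attention map over image patches.

    High entropy means attention is spread near-uniformly over the image
    (weak visual grounding); low entropy means it is concentrated on a
    few patches (strong localization).
    """
    p = attn_map / (attn_map.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def rerank_tokens(lang_logprobs, attn_maps_per_token, alpha=1.0):
    """Re-rank candidate tokens by language log-prob minus an entropy penalty.

    lang_logprobs: dict mapping token -> language-model log-probability.
    attn_maps_per_token: dict mapping token -> list of per-head cross-attention
        maps (each a 1-D array over image patches).
    The consensus penalty here is simply the mean spatial entropy across
    heads (an assumed stand-in for the paper's localization consensus).
    """
    scores = {}
    for tok, lp in lang_logprobs.items():
        heads = attn_maps_per_token[tok]
        penalty = np.mean([spatial_entropy(h) for h in heads])
        scores[tok] = lp - alpha * penalty
    return sorted(scores, key=scores.get, reverse=True)
```

In this toy setting, a token whose attention is peaked on a small region can outrank a token with a slightly higher language score but diffuse, ungrounded attention, which is the re-ranking behavior the abstract attributes to VISAGE.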