NoLan: Mitigating Object Hallucinations in Large Vision-Language Models via Dynamic Suppression of Language Priors
2026-02-25 • Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition • Artificial Intelligence • Computation and Language
AI summary
The authors studied why Large Vision-Language Models sometimes mention objects that are not in the images they are given. They found that the language decoder, the part of the model that generates the text, is the main source of this problem: it relies too heavily on what it expects rather than on what the image actually shows. To fix this, they created a method called NoLan that suppresses these prior-driven guesses without any extra training. Their tests showed that NoLan helps models make fewer such mistakes when describing images.
Large Vision-Language Models • Object Hallucination • Vision Encoder • Language Decoder • Multimodal Models • Text Generation • Language Prior • Output Distribution • Model Decoding • Model Evaluation
Authors
Lingfeng Ren, Weihao Yu, Runpeng Yu, Xinchao Wang
Abstract
Object hallucination is a critical issue in Large Vision-Language Models (LVLMs), where outputs include objects that do not appear in the input image. A natural question arises from this phenomenon: which component of the LVLM pipeline primarily contributes to object hallucinations, the vision encoder that perceives visual information, or the language decoder that generates text responses? In this work, we answer this question by designing a systematic experiment that analyzes the roles of the vision encoder and the language decoder in hallucination generation. Our observations reveal that object hallucinations are predominantly associated with strong priors from the language decoder. Based on this finding, we propose a simple, training-free framework, No-Language-Hallucination Decoding (NoLan), which refines the output distribution by dynamically suppressing language priors, with the suppression strength modulated by the difference between the output distributions under multimodal and text-only inputs. Experimental results demonstrate that NoLan effectively reduces object hallucinations across various LVLMs on different tasks. For instance, NoLan achieves substantial improvements on POPE, improving the accuracy of LLaVA-1.5 7B and Qwen-VL 7B by up to 6.45 and 7.21 points, respectively. The code is publicly available at: https://github.com/lingfengren/NoLan.
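To make the decoding rule concrete, the sketch below illustrates one plausible form of such language-prior suppression at a single decoding step. It is not the paper's released implementation: the contrastive update (1 + alpha) * z_mm - alpha * z_text and the Jensen-Shannon-based modulation of alpha are assumptions chosen to match the abstract's description, and the function name `nolan_decode_step` and the `alpha_max` knob are hypothetical.

```python
import torch
import torch.nn.functional as F


def nolan_decode_step(logits_mm: torch.Tensor,
                      logits_text: torch.Tensor,
                      alpha_max: float = 1.0) -> torch.Tensor:
    """One decoding step of dynamic language-prior suppression (sketch).

    logits_mm   -- next-token logits given the (image + text) input
    logits_text -- next-token logits given the text-only input
    alpha_max   -- hypothetical upper bound on the suppression strength
    """
    p_mm = F.softmax(logits_mm, dim=-1)
    p_text = F.softmax(logits_text, dim=-1)

    # Dynamic modulation (assumption): measure how far the multimodal
    # distribution drifts from the text-only one via the Jensen-Shannon
    # divergence, normalized to [0, 1] by its ln(2) upper bound.
    m = 0.5 * (p_mm + p_text)
    jsd = 0.5 * (F.kl_div(m.log(), p_mm, reduction="sum")
                 + F.kl_div(m.log(), p_text, reduction="sum"))
    alpha = alpha_max * torch.clamp(jsd / torch.log(torch.tensor(2.0)), 0.0, 1.0)

    # Contrastive-style refinement: down-weight tokens favored mainly by
    # the text-only prior, more aggressively when the divergence is large.
    refined_logits = (1 + alpha) * logits_mm - alpha * logits_text
    return refined_logits


if __name__ == "__main__":
    # Toy usage with random logits standing in for real model outputs.
    vocab = 8
    z_mm = torch.randn(vocab)
    z_text = torch.randn(vocab)
    z_refined = nolan_decode_step(z_mm, z_text)
    print("refined next-token id:", int(torch.argmax(z_refined)))
```

In practice the two logit vectors would come from the same LVLM queried once with and once without the image, and the refined logits would replace the raw multimodal logits before greedy selection or sampling.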