HALP: Detecting Hallucinations in Vision-Language Models without Generating a Single Token

2026-03-05
Computer Vision and Pattern Recognition
vision-language models, hallucination, internal representations, AUROC, multimodal fusion, text decoder, token probe, model safety
Authors
Sai Akhil Kogilathota, Sripadha Vallabha E G, Luzhe Sun, Jiawei Zhou
Abstract
Hallucinations remain a persistent challenge for vision-language models (VLMs), which often describe nonexistent objects or fabricate facts. Existing detection methods typically operate after text generation, making intervention both costly and untimely. We investigate whether hallucination risk can instead be predicted before any token is generated by probing a model's internal representations in a single forward pass. Across a diverse set of vision-language tasks and eight modern VLMs, including Llama-3.2-Vision, Gemma-3, Phi-4-VL, and Qwen2.5-VL, we examine three families of internal representations: (i) visual-only features without multimodal fusion, (ii) vision-token representations within the text decoder, and (iii) query-token representations that integrate visual and textual information before generation. Probes trained on these representations achieve strong hallucination-detection performance without decoding, reaching up to 0.93 AUROC on Gemma-3-12B, Phi-4-VL 5.6B, and Molmo 7B. Late query-token states are the most predictive for most models, while visual or mid-layer features dominate in a few architectures (e.g., ~0.79 AUROC for Qwen2.5-VL-7B using visual-only features). These results demonstrate that (1) hallucination risk is detectable pre-generation, (2) the most informative layer and modality vary across architectures, and (3) lightweight probes have the potential to enable early abstention, selective routing, and adaptive decoding to improve both safety and efficiency.
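The abstract describes training lightweight probes on a model's internal representations to score hallucination risk before generation, evaluated with AUROC. The following is a minimal sketch of that pipeline, assuming a simple logistic-regression probe over per-example hidden-state vectors; the feature extraction, probe architecture, and data here (synthetic) are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: linear probe on pre-generation hidden states to predict
# hallucination, scored by AUROC. The probe type (logistic regression)
# and synthetic features are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for internal representations: one d-dim vector per example,
# e.g. a late-layer query-token state taken in a single forward pass.
d, n = 64, 2000
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
# Synthetic binary labels: 1 = the model hallucinated on this example.
y = (X @ w_true + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0
)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_tr, y_tr)

# Score each held-out example's hallucination risk without decoding
# a single token, then summarize with AUROC.
risk = probe.predict_proba(X_te)[:, 1]
auroc = roc_auc_score(y_te, risk)
print(f"probe AUROC: {auroc:.3f}")
```

In this framing, the probe's per-example risk score is what would drive the downstream interventions the abstract mentions, such as early abstention or routing, via a simple threshold on `risk`.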