Saliency-Aware Multi-Route Thinking: Revisiting Vision-Language Reasoning
2026-02-18 • Computer Vision and Pattern Recognition
AI summary
The authors address challenges in vision-language models (VLMs), which combine images and text to reason together. They found that current methods struggle because visual input is only given once at the start, making later text-based reasoning less accurate and leading to mistakes over time. To improve this, the authors introduce Saliency-Aware Principle (SAP) selection, a method that helps the model revisit visual information during reasoning, making the process more stable and accurate without extra training. Their approach also allows the model to explore different reasoning paths in parallel, reducing errors like hallucinating objects and speeding up responses.
Keywords
Vision-language models, Visual grounding, Autoregressive generation, Saliency-Aware Principle (SAP), Multi-route inference, Object hallucination, Chain-of-Thought reasoning, Token-level trajectories
Authors
Mingjia Shi, Yinhan He, Yaochen Zhu, Jundong Li
Abstract
Vision-language models (VLMs) aim to reason by jointly leveraging visual and textual modalities. While allocating additional inference-time computation has proven effective for large language models (LLMs), achieving similar scaling in VLMs remains challenging. A key obstacle is that visual inputs are typically provided only once at the start of generation, while textual reasoning (e.g., early visual summaries) is generated autoregressively, causing reasoning to become increasingly text-dominated and allowing early visual grounding errors to accumulate. Moreover, vanilla guidance for visual grounding during inference is often coarse and noisy, making it difficult to steer reasoning over long texts. To address these challenges, we propose \emph{Saliency-Aware Principle} (SAP) selection. SAP operates on high-level reasoning principles rather than token-level trajectories, which enables stable control over discrete generation under noisy feedback while allowing later reasoning steps to re-consult visual evidence when renewed grounding is required. In addition, SAP supports multi-route inference, enabling parallel exploration of diverse reasoning behaviors. SAP is model-agnostic and data-free, requiring no additional training. Empirical results show that SAP achieves competitive performance, especially in reducing object hallucination, under comparable token-generation budgets, while yielding more stable reasoning and lower response latency than CoT-style long sequential reasoning.
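The selection mechanism described above can be illustrated with a minimal sketch. The paper's actual saliency scoring and principle set are not specified in the abstract, so every name, the `Route` structure, and the scoring heuristic below are illustrative assumptions; the only idea taken from the text is that selection acts on whole routes guided by high-level principles, not on token-level trajectories.

```python
# Hypothetical sketch of Saliency-Aware Principle (SAP) selection over
# parallel reasoning routes. All names and scores are illustrative
# assumptions, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Route:
    principle: str   # high-level reasoning principle guiding this route
    answer: str      # answer produced when reasoning under the principle
    saliency: float  # assumed score: how well the route stayed visually grounded

def sap_select(routes: list[Route]) -> Route:
    """Pick the route whose reasoning remained best grounded in the image.

    Selection operates at the principle (route) level, which is more
    stable under noisy feedback than steering individual tokens.
    """
    return max(routes, key=lambda r: r.saliency)

# Three parallel routes exploring diverse reasoning behaviors.
routes = [
    Route("describe objects first", "two cats", saliency=0.82),
    Route("count regions directly", "three cats", saliency=0.41),
    Route("verify each object against the image", "two cats", saliency=0.91),
]

best = sap_select(routes)
print(f"{best.principle} -> {best.answer}")
```

Because the routes run in parallel and only one is selected, this style of inference can trade long sequential chains for shorter, independently grounded routes, which is consistent with the latency claim in the abstract.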