VISion On Request: Enhanced LVLM efficiency with sparse, dynamically selected vision-language interactions

2026-03-24

Computer Vision and Pattern Recognition, Artificial Intelligence, Machine Learning
AI summary

The authors challenge the common method of making large vision-language models faster by cutting down image details, which can hurt performance on tricky tasks. Instead, they introduce VISOR, a new approach that keeps all the image information but selectively focuses the model's attention on important parts. They train one model that can adjust how much it pays attention based on how hard each image is, saving computation when possible. Their tests show VISOR works well and is efficient, especially on tasks needing fine visual understanding.

Keywords
Large Vision-Language Models (LVLMs), Visual token reduction, Cross-attention, Self-attention, High-resolution visual tokens, Inference cost, Dynamic computation allocation, Visual reasoning, Computational efficiency, Attention layers
Authors
Adrian Bulat, Alberto Baldrati, Ioannis Maniadis Metaxas, Yassine Ouali, Georgios Tzimiropoulos
Abstract
Existing approaches for improving the efficiency of Large Vision-Language Models (LVLMs) are largely based on the concept of visual token reduction. This approach, however, creates an information bottleneck that impairs performance, especially on challenging tasks that require fine-grained understanding and reasoning. In this work, we challenge this paradigm by introducing VISion On Request (VISOR), a method that reduces inference cost without discarding visual information. Instead of compressing the image, VISOR improves efficiency by sparsifying the interaction between image and text tokens. Specifically, the language model attends to the full set of high-resolution visual tokens through a small, strategically placed set of attention layers: general visual context is provided by efficient cross-attention between text and image tokens, while a few well-placed and dynamically selected self-attention layers refine the visual representations themselves, enabling complex, high-resolution reasoning when needed. Based on this principle, we first train a single universal network on a range of computational budgets by varying the number of self-attention layers, and then introduce a lightweight policy mechanism that dynamically allocates visual computation based on per-sample complexity. Extensive experiments show that VISOR drastically reduces computational cost while matching or exceeding state-of-the-art results across a diverse suite of benchmarks, and excels in challenging tasks that require detailed visual understanding.
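To make the mechanism concrete, the minimal PyTorch sketch below illustrates the idea from the abstract: text tokens read the full set of high-resolution visual tokens through inexpensive cross-attention, while a lightweight policy decides how many visual self-attention refinement layers to run based on sample complexity. All class and parameter names here (SparseVisionLanguageStack, ComplexityPolicy, max_refine_layers, and so on) are illustrative assumptions, not the authors' implementation or API.

```python
# Minimal sketch of sparse text-image interaction with dynamically
# selected visual self-attention layers. Not the authors' code; all
# names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class TextImageCrossAttention(nn.Module):
    """Text tokens attend to all visual tokens (cheap: visual tokens are
    never appended to the language sequence)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(query=text, key=vis, value=vis)
        return text + out


class VisualSelfAttention(nn.Module):
    """Optional refinement of the visual tokens themselves, run only when
    the policy requests extra visual computation."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(vis, vis, vis)
        return vis + out


class ComplexityPolicy(nn.Module):
    """Lightweight gate: predicts how many refinement layers to activate
    from pooled visual features (hypothetical design, for illustration)."""

    def __init__(self, dim: int, max_layers: int):
        super().__init__()
        self.head = nn.Linear(dim, max_layers + 1)  # classes 0..max_layers

    def forward(self, vis: torch.Tensor) -> torch.Tensor:
        logits = self.head(vis.mean(dim=1))  # pool over visual tokens
        return logits.argmax(dim=-1)         # per-sample layer budget


class SparseVisionLanguageStack(nn.Module):
    def __init__(self, dim: int = 256, max_refine_layers: int = 4):
        super().__init__()
        self.cross = TextImageCrossAttention(dim)
        self.refine = nn.ModuleList(
            VisualSelfAttention(dim) for _ in range(max_refine_layers)
        )
        self.policy = ComplexityPolicy(dim, max_refine_layers)

    def forward(self, text: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        budget = int(self.policy(vis).max())   # batch-level budget for simplicity
        for layer in self.refine[:budget]:
            vis = layer(vis)                   # refine visual tokens "on request"
        return self.cross(text, vis)           # text reads the (refined) visuals


if __name__ == "__main__":
    text = torch.randn(2, 16, 256)    # (batch, text tokens, dim)
    vis = torch.randn(2, 1024, 256)   # (batch, high-res visual tokens, dim)
    model = SparseVisionLanguageStack()
    print(model(text, vis).shape)     # torch.Size([2, 16, 256])
```

In this sketch the predicted budget is applied at the batch level for simplicity; a truly per-sample budget, running different numbers of refinement layers for different images, would be closer to the dynamic, per-sample allocation of visual computation described in the abstract.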