The Dual Mechanisms of Spatial Reasoning in Vision-Language Models

2026-03-23

Computer Vision and Pattern Recognition; Machine Learning
AI summary

The authors study how vision-language models (VLMs) understand where objects are in relation to one another in images. They find that although the language model backbone represents spatial relations, the dominant source of spatial understanding is the vision encoder, which encodes the layout of objects together with the surrounding background. Enhancing these vision-derived spatial signals improves performance on tasks that require reasoning about object positions, highlighting the central role of vision encoders in VLM spatial reasoning.

vision-language models, spatial relations, vision encoder, language model backbone, multimodal tasks, image captioning, visual question answering, spatial reasoning, visual tokens, object layout
Authors
Kelly Cui, Nikhil Prakash, Ayush Raina, David Bau, Antonio Torralba, Tamar Rott Shaham
Abstract
Many multimodal tasks, such as image captioning and visual question answering, require vision-language models (VLMs) to associate objects with their properties and spatial relations. Yet it remains unclear where and how such associations are computed within VLMs. In this work, we show that VLMs rely on two concurrent mechanisms to represent such associations. In the language model backbone, intermediate layers represent content-independent spatial relations on top of visual tokens corresponding to objects. However, this mechanism plays only a secondary role in shaping model predictions. Instead, the dominant source of spatial information originates in the vision encoder, whose representations encode the layout of objects and are directly exploited by the language model backbone. Notably, this spatial signal is distributed globally across visual tokens, extending beyond object regions into surrounding background areas. We show that enhancing these vision-derived spatial representations globally across all image tokens improves spatial reasoning performance on naturalistic images. Together, our results clarify how spatial association is computed within VLMs and highlight the central role of vision encoders in enabling spatial reasoning.
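To make the idea of "enhancing vision-derived spatial representations globally across all image tokens" concrete, here is a minimal toy sketch, not the authors' method: it assumes a hypothetical unit vector `spatial_dir` (e.g. recovered by probing) that carries layout information, and amplifies every visual token's component along that direction, background tokens included. All names, shapes, and the scaling scheme are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not the paper's actual intervention):
# amplify a presumed spatial direction in every visual token.
rng = np.random.default_rng(0)
T, D = 16, 8                        # toy token count and hidden size
tokens = rng.normal(size=(T, D))    # stand-in for vision-encoder outputs
spatial_dir = np.zeros(D)
spatial_dir[0] = 1.0                # assumed unit "spatial" axis

def enhance(tokens, direction, alpha=0.5):
    """Scale each token's component along `direction` by (1 + alpha),
    applied globally to all tokens rather than only object regions."""
    proj = tokens @ direction                   # (T,) projections
    return tokens + alpha * np.outer(proj, direction)

boosted = enhance(tokens, spatial_dir, alpha=0.5)
# The spatial component grew by 50%; orthogonal components are untouched.
print(np.allclose(boosted[:, 0], 1.5 * tokens[:, 0]))  # True
print(np.allclose(boosted[:, 1:], tokens[:, 1:]))      # True
```

The point of applying `enhance` to every token, not just those over objects, mirrors the abstract's observation that the spatial signal is distributed globally, extending into background regions.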