How Do LLMs and VLMs Understand Viewpoint Rotation Without Vision? An Interpretability Study

2026-04-16

Artificial Intelligence
AI summary

The authors studied how well language models understand spatial changes, such as imagining what they would see after turning around, using only text descriptions. They found that current models struggle with these tasks, making many mistakes, while humans get them right with ease. By analyzing the models' internal workings, the authors discovered that the models encode the viewpoints but cannot correctly link these viewpoints to what should be observed, causing errors. They improved performance by fine-tuning specific attention heads, without hurting other skills.

spatial intelligence, viewpoint rotation, language models, visual-language models, hidden states, attention heads, causal intervention, fine-tuning, hallucination in AI, probing analysis
Authors
Zhen Yang, Ping Jian, Zhongbin Guo, Zuming Zhang, Chengzhi Li, Yonghong Deng, Xinyue Zhang, Wenpeng Lu
Abstract
Over the past year, spatial intelligence has drawn increasing attention. Many prior works study it from the perspective of visual-spatial intelligence, where models have access to visuospatial information from visual inputs. However, in the absence of visual information, whether linguistic intelligence alone is sufficient to endow models with spatial intelligence, and how models perform relevant tasks with text-only inputs, remain unexplored. Therefore, in this paper, we focus on a fundamental and critical capability in spatial intelligence from a linguistic perspective: viewpoint rotation understanding (VRU). Specifically, LLMs and VLMs are asked to infer their final viewpoint and predict the corresponding observation in an environment, given a textual description of viewpoint rotations and observations over multiple steps. We find that both LLMs and VLMs perform poorly on our proposed dataset while humans easily achieve 100% accuracy, indicating a substantial gap between current model capabilities and the requirements of spatial intelligence. To uncover the underlying mechanisms, we conduct a layer-wise probing analysis and head-wise causal intervention. Our findings reveal that although models encode viewpoint information in their hidden states, they appear to struggle to bind the viewpoint position to the corresponding observation, resulting in hallucination in the final layers. Finally, we selectively fine-tune the key attention heads identified by causal intervention to improve VRU performance. Experimental results demonstrate that such selective fine-tuning improves VRU performance while avoiding catastrophic forgetting of generic abilities. Our dataset and code will be released at https://github.com/Young-Zhen/VRU_Interpret .
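To make the layer-wise probing analysis concrete, the sketch below trains a simple linear probe on per-layer hidden states to decode a discrete viewpoint label (e.g., N/E/S/W). The data here is synthetic and the ridge-regression probe is only one common choice; the paper's actual probe architecture, layer set, and label scheme are not specified in this abstract, so treat every name and parameter below as an illustrative assumption.

```python
import numpy as np

def train_linear_probe(X, y, l2=1e-3):
    """Fit a one-vs-rest ridge-regression probe mapping hidden states to one-hot labels."""
    n_classes = int(y.max()) + 1
    Y = np.eye(n_classes)[y]                      # one-hot targets
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ Y)
    return W

def probe_accuracy(W, X, y):
    """Classification accuracy of the linear probe on held-out states."""
    return float((np.argmax(X @ W, axis=1) == y).mean())

# Synthetic stand-in for per-layer hidden states: 4 viewpoint classes (N/E/S/W).
rng = np.random.default_rng(0)
n, d, n_layers = 400, 64, 6
labels = rng.integers(0, 4, size=n)
class_dirs = rng.normal(size=(4, d))              # a direction in hidden space per viewpoint

layer_accs = []
for layer in range(n_layers):
    # Signal strength grows with depth, mimicking viewpoint info emerging in later layers.
    signal = 0.2 + 0.5 * layer / (n_layers - 1)
    X = signal * class_dirs[labels] + rng.normal(size=(n, d))
    split = n // 2                                # first half trains the probe, second half tests it
    W = train_linear_probe(X[:split], labels[:split])
    layer_accs.append(probe_accuracy(W, X[split:], labels[split:]))

print([round(a, 2) for a in layer_accs])          # accuracy per layer, low to high depth
```

In a real experiment, `X` for each layer would come from the model's residual-stream activations at a fixed token position, and a rising-then-plateauing accuracy curve would indicate where viewpoint information becomes linearly decodable.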