Spa3R: Predictive Spatial Field Modeling for 3D Visual Reasoning
2026-02-24 • Computer Vision and Pattern Recognition
AI summary
The authors show that models can learn to understand 3D space just from 2D pictures taken from different angles, without extra 3D data or complicated instructions. They created Spa3R, a system that predicts how a scene looks from new viewpoints using a compact hidden code, helping the model grasp the full 3D structure internally. By adding Spa3R to existing vision-language models, they made Spa3-VLM, which better understands and answers questions about 3D scenes. Their experiments show this method improves 3D visual reasoning compared to previous approaches.
Vision-Language Models · 3D Spatial Understanding · Self-Supervised Learning · Multi-View Images · Feature Fields · Latent Representation · 3D Visual Question Answering · Spatial Intelligence · Predictive Spatial Field Modeling · Model Adaptation
Authors
Haoyi Jiang, Liu Liu, Xinjie Wang, Yonghao He, Wei Sui, Zhizhong Su, Wenyu Liu, Xinggang Wang
Abstract
While Vision-Language Models (VLMs) exhibit exceptional 2D visual understanding, their ability to comprehend and reason about 3D space, a cornerstone of spatial intelligence, remains superficial. Current methodologies attempt to bridge this domain gap either by relying on explicit 3D modalities or by augmenting VLMs with partial, view-conditioned geometric priors. However, such approaches hinder scalability and ultimately burden the language model with the ill-posed task of implicitly reconstructing holistic 3D geometry from sparse cues. In this paper, we argue that spatial intelligence can emerge inherently from 2D vision alone, rather than being imposed via explicit spatial instruction tuning. To this end, we introduce Spa3R, a self-supervised framework that learns a unified, view-invariant spatial representation directly from unposed multi-view images. Spa3R is built upon the proposed Predictive Spatial Field Modeling (PSFM) paradigm, where Spa3R learns to synthesize feature fields for arbitrary unseen views conditioned on a compact latent representation, thereby internalizing a holistic and coherent understanding of the underlying 3D scene. We further integrate the pre-trained Spa3R Encoder into existing VLMs via a lightweight adapter to form Spa3-VLM, effectively grounding language reasoning in a global spatial context. Experiments on the challenging VSI-Bench demonstrate that Spa3-VLM achieves state-of-the-art accuracy of 58.6% on 3D VQA, significantly outperforming prior methods. These results highlight PSFM as a scalable path toward advancing spatial intelligence. Code is available at https://github.com/hustvl/Spa3R.
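The PSFM training objective described in the abstract can be illustrated with a toy sketch: pool unposed context views into one compact scene latent, predict the feature field of a held-out view from that latent plus a query-view embedding, and supervise with the held-out view's own features. Everything below (the linear encoder/decoder, shapes, and names) is a hypothetical simplification for illustration, not the authors' actual architecture.

```python
# Minimal sketch of the Predictive Spatial Field Modeling (PSFM) idea.
# All components here are toy stand-ins; the real Spa3R uses learned
# networks, not the linear maps shown.
import numpy as np

rng = np.random.default_rng(0)

def encode(context_views):
    # Pool unposed context views into one compact, view-invariant
    # scene latent. context_views: (n_views, feat_dim)
    return context_views.mean(axis=0)  # (feat_dim,)

def decode(latent, query_embedding, W):
    # Synthesize the feature field for an arbitrary unseen view,
    # conditioned only on the scene latent and a query-view embedding.
    return W @ np.concatenate([latent, query_embedding])

# Toy data: 4 context views and 1 held-out target view of one "scene".
feat_dim, query_dim = 8, 4
views = rng.normal(size=(4, feat_dim))
query = rng.normal(size=query_dim)
target_features = rng.normal(size=feat_dim)  # held-out view's features

W = rng.normal(size=(feat_dim, feat_dim + query_dim)) * 0.1
latent = encode(views)
pred = decode(latent, query, W)

# Self-supervised objective: match the predicted field to the held-out
# view's features. No camera poses, depth, or 3D labels are involved,
# which is what lets spatial structure emerge from 2D images alone.
loss = float(np.mean((pred - target_features) ** 2))
print(loss >= 0.0)
```

Minimizing this prediction error over many scenes and view pairs forces the latent to capture scene structure shared across viewpoints, which is the sense in which a holistic 3D understanding is internalized.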