Reconstruction Matters: Learning Geometry-Aligned BEV Representation through 3D Gaussian Splatting
2026-03-19 • Computer Vision and Pattern Recognition
AI summary
The authors focus on Bird's-Eye-View (BEV) perception, which helps self-driving cars understand their surroundings in a top-down view. Unlike previous methods that treat the process as a black box, the authors propose Splat2BEV, which explicitly reconstructs 3D scenes using a technique called Gaussian splatting before projecting features into BEV. This approach provides clearer geometric information, improving accuracy for tasks like object detection and segmentation. Their tests on popular datasets show better performance compared to existing methods.
Bird's-Eye-View (BEV) · Gaussian Splatting · 3D Reconstruction · Semantic Segmentation · 3D Object Detection · Multi-view Images · Autonomous Driving · Feature Projection · End-to-End Training
Authors
Yiren Lu, Xin Ye, Burhaneddin Yaman, Jingru Luo, Zhexiao Xiong, Liu Ren, Yu Yin
Abstract
Bird's-Eye-View (BEV) perception serves as a cornerstone for autonomous driving, offering a unified spatial representation that fuses surrounding-view images to enable reasoning for various downstream tasks, such as semantic segmentation, 3D object detection, and motion prediction. However, most existing BEV perception frameworks adopt an end-to-end training paradigm, where image features are directly transformed into the BEV space and optimized solely through downstream task supervision. This formulation treats the entire perception process as a black box, often lacking explicit 3D geometric understanding and interpretability, leading to suboptimal performance. In this paper, we claim that an explicit 3D representation matters for accurate BEV perception, and we propose Splat2BEV, a Gaussian Splatting-assisted framework for BEV tasks. Splat2BEV aims to learn BEV feature representations that are both semantically rich and geometrically precise. We first pre-train a Gaussian generator that explicitly reconstructs 3D scenes from multi-view inputs, enabling the generation of geometry-aligned feature representations. These representations are then projected into the BEV space to serve as inputs for downstream tasks. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that Splat2BEV achieves state-of-the-art performance and validate the effectiveness of incorporating explicit 3D reconstruction into BEV perception.
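The projection step the abstract describes — taking geometry-aligned per-Gaussian features and collapsing them onto the ground plane as a BEV feature map — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, grid resolution, and the opacity-weighted pooling are assumptions for the sketch.

```python
import numpy as np

def gaussians_to_bev(means, feats, opacities, bev_range=50.0, grid=128):
    """Splat per-Gaussian features into a BEV grid (illustrative sketch).

    means:     (N, 3) Gaussian centers in ego-frame xyz (meters)
    feats:     (N, C) feature vector attached to each Gaussian
    opacities: (N,)   per-Gaussian opacity, used as a pooling weight
    Returns a (grid, grid, C) BEV feature map.
    """
    C = feats.shape[1]
    bev = np.zeros((grid, grid, C))
    weight = np.full((grid, grid), 1e-6)  # avoid division by zero

    # Map x, y in [-bev_range, bev_range] to integer cell indices.
    ij = ((means[:, :2] + bev_range) / (2 * bev_range) * grid).astype(int)
    valid = ((ij >= 0) & (ij < grid)).all(axis=1)

    # Opacity-weighted accumulation: Gaussians with higher opacity
    # contribute more to the cell they fall into.
    for (i, j), f, a in zip(ij[valid], feats[valid], opacities[valid]):
        bev[i, j] += a * f
        weight[i, j] += a

    return bev / weight[..., None]  # normalized BEV feature map
```

A real splatting-based projector would spread each Gaussian over multiple cells according to its covariance rather than binning only its center, but the weighted scatter-and-normalize pattern above captures the basic idea of turning an explicit 3D representation into BEV input features.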