M^3: Dense Matching Meets Multi-View Foundation Models for Monocular Gaussian Splatting SLAM
2026-03-17 • Computer Vision and Pattern Recognition
AI summary
The authors address the challenge of reconstructing 3D scenes from ordinary, uncalibrated video, which requires both accurate camera pose estimation and fast online updates in dynamic scenes. Their system, M³, improves how different video frames match up by adding a dedicated matching head to an existing multi-view foundation model. This lets the system track the camera more reliably and align frames more precisely during processing. Their experiments show that M³ outperforms previous methods in both pose estimation and scene reconstruction on well-known indoor and outdoor datasets.
monocular video, pose estimation, 3D reconstruction, multi-view foundation model, SLAM, Gaussian Splatting, pixel correspondences, dynamic area suppression, intrinsic alignment, ScanNet++
Authors
Kerui Ren, Guanghao Li, Changjian Jiang, Yingxiang Xu, Tao Lu, Linning Xu, Junting Dong, Jiangmiao Pang, Mulin Yu, Bo Dai
Abstract
Streaming reconstruction from uncalibrated monocular video remains challenging, as it requires both high-precision pose estimation and computationally efficient online refinement in dynamic environments. While coupling 3D foundation models with SLAM frameworks is a promising paradigm, a critical bottleneck persists: most multi-view foundation models estimate poses in a feed-forward manner, yielding pixel-level correspondences that lack the requisite precision for rigorous geometric optimization. To address this, we present M^3, which augments the Multi-view foundation model with a dedicated Matching head to facilitate fine-grained dense correspondences and integrates it into a robust Monocular Gaussian Splatting SLAM. M^3 further enhances tracking stability by incorporating dynamic area suppression and cross-inference intrinsic alignment. Extensive experiments on diverse indoor and outdoor benchmarks demonstrate state-of-the-art accuracy in both pose estimation and scene reconstruction. Notably, M^3 reduces ATE RMSE by 64.3% compared to VGGT-SLAM 2.0 and outperforms ARTDECO by 2.11 dB in PSNR on the ScanNet++ dataset.