LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory
2026-03-03 • Computer Vision and Pattern Recognition • Machine Learning
AI summary
The authors introduce LoGeR, a method for producing detailed 3D reconstructions from very long video sequences without post-optimization. It splits videos into chunks and uses a hybrid memory design to keep the reconstruction accurate and consistent over time: a learned parametric memory anchors long-range alignment in a shared coordinate frame, while sliding-window attention preserves local detail across chunk boundaries. Experiments show LoGeR substantially outperforms previous feedforward methods on standard benchmarks and scales smoothly to videos with thousands of frames.
3D reconstruction · feedforward models · video sequences · attention mechanism · memory module · Test-Time Training · Sliding Window Attention · KITTI dataset · scale drift · global coordinate frame
Authors
Junyi Zhang, Charles Herrmann, Junhwa Hur, Chen Sun, Ming-Hsuan Yang, Forrester Cole, Trevor Darrell, Deqing Sun
Abstract
Feedforward geometric foundation models achieve strong short-window reconstruction, yet scaling them to minutes-long videos is bottlenecked by the quadratic complexity of attention or by the limited effective memory of recurrent designs. We present LoGeR (Long-context Geometric Reconstruction), a novel architecture that scales dense 3D reconstruction to extremely long sequences without post-optimization. LoGeR processes video streams in chunks, leveraging strong bidirectional priors for high-fidelity intra-chunk reasoning. To address the critical challenge of coherence across chunk boundaries, we propose a learning-based hybrid memory module. This dual-component system combines a parametric Test-Time Training (TTT) memory, which anchors the global coordinate frame and prevents scale drift, with a non-parametric Sliding Window Attention (SWA) mechanism, which preserves uncompressed context for high-precision alignment of adjacent chunks. Remarkably, this memory architecture enables LoGeR to be trained on sequences of 128 frames yet generalize to thousands of frames at inference. Evaluated across standard benchmarks and a newly repurposed VBR dataset with sequences of up to 19k frames, LoGeR substantially outperforms prior state-of-the-art feedforward methods, reducing ATE on KITTI by over 74%, and achieves robust, globally consistent reconstruction over unprecedented horizons.
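The abstract's hybrid-memory scheme can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation: a fast-weight matrix `W` stands in for the parametric TTT memory (written by one gradient step on a self-reconstruction loss per chunk), and a fixed-length deque of raw chunk features stands in for the non-parametric sliding window; the "attention" read is reduced to a mean over the uncompressed context.

```python
import numpy as np
from collections import deque

def process_stream(frames, chunk=8, window=2, lr=0.1):
    """Toy hybrid-memory chunked processing (hypothetical sketch).

    - W:       parametric memory, updated TTT-style by a gradient step
               per chunk on the loss L(W) = ||x W - x||^2
    - recent:  non-parametric sliding window of the last `window`
               chunks of raw, uncompressed features (SWA-style)
    """
    frames = np.asarray(frames, dtype=float)      # (n_frames, dim)
    dim = frames.shape[1]
    W = np.zeros((dim, dim))                      # global anchor memory
    recent = deque(maxlen=window)                 # uncompressed context
    outputs = []
    for start in range(0, len(frames), chunk):
        x = frames[start:start + chunk]           # current chunk (c, dim)
        # Read: combine the chunk with a read from W and the raw window.
        ctx = np.concatenate([*recent, x]) if recent else x
        out = x + x @ W + ctx.mean(axis=0)        # stand-in for attention
        outputs.append(out)
        # TTT write: one gradient step; dL/dW = 2 x^T (x W - x).
        grad = 2.0 * x.T @ (x @ W - x)
        W -= lr * grad / len(x)
        recent.append(x)                          # SWA write: keep raw chunk
    return np.concatenate(outputs), W
```

Because `W` is updated only through cheap per-chunk gradient steps and the window has fixed length, per-chunk cost stays constant, which is the property that lets a model trained on 128-frame sequences run on streams of thousands of frames.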