Towards Multimodal Lifelong Understanding: A Dataset and Agentic Baseline
2026-03-05 • Computer Vision and Pattern Recognition
AI summary
The authors created a large video dataset called MM-Lifelong that captures everyday activities over different time spans: days, weeks, and months. They found that current AI models struggle to understand long videos either because they forget important details (a working memory bottleneck) or lose track of where events sit in the timeline (localization collapse). To address this, they developed a new agentic model named ReMA that manages memory by updating its understanding step by step. They also constructed dedicated dataset splits to test how well future models generalize to new or unusual situations.
multimodal learning, lifelong learning, working memory bottleneck, global localization collapse, recursive belief state, memory management, video datasets, out-of-distribution generalization, agentic models, temporal reasoning
Authors
Guo Chen, Lidong Lu, Yicheng Liu, Liangrui Dong, Lidong Zou, Jixin Lv, Zhenquan Li, Xinyi Mao, Baoqi Pei, Shihao Wang, Zhiqi Li, Karan Sapra, Fuxiao Liu, Yin-Dong Zheng, Yifei Huang, Limin Wang, Zhiding Yu, Andrew Tao, Guilin Liu, Tong Lu
Abstract
While datasets for video understanding have scaled to hour-long durations, they typically consist of densely concatenated clips that differ from natural, unscripted daily life. To bridge this gap, we introduce MM-Lifelong, a dataset designed for Multimodal Lifelong Understanding. Comprising 181.1 hours of footage, it is structured across Day, Week, and Month scales to capture varying temporal densities. Extensive evaluations reveal two critical failure modes in current paradigms: end-to-end MLLMs suffer from a Working Memory Bottleneck due to context saturation, while representative agentic baselines experience Global Localization Collapse when navigating sparse, month-long timelines. To address this, we propose the Recursive Multimodal Agent (ReMA), which employs dynamic memory management to iteratively update a recursive belief state, significantly outperforming existing methods. Finally, we establish dataset splits designed to isolate temporal and domain biases, providing a rigorous foundation for future research in supervised learning and out-of-distribution generalization.
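The core idea behind ReMA, as the abstract describes it, is to avoid context saturation by folding each new video segment into a compact, recursively updated belief state under a fixed memory budget. A minimal sketch of that update loop is below; all names, data structures, and the string-based summarizer are illustrative assumptions, not the authors' implementation (a real system would use an MLLM to compress evidence rather than concatenating it):

```python
# Hypothetical sketch of a recursive belief-state update with a fixed
# memory budget, in the spirit of the abstract's description of ReMA.
# All identifiers here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class BeliefState:
    summary: str = ""                              # compressed view of the timeline so far
    evidence: list = field(default_factory=list)   # retained key observations

def update_belief(state: BeliefState, segment: str, budget: int = 3) -> BeliefState:
    """Fold one video segment into the belief state, evicting the oldest
    evidence to stay within a fixed budget (the 'dynamic memory management')."""
    evidence = (state.evidence + [segment])[-budget:]  # keep only the most recent items
    summary = " | ".join(evidence)                     # stand-in for an LLM summarizer
    return BeliefState(summary=summary, evidence=evidence)

state = BeliefState()
for seg in ["day-1: cooking", "day-2: commute", "day-3: meeting", "day-4: gym"]:
    state = update_belief(state, seg)
print(state.summary)  # only the last `budget` segments survive in memory
```

The point of the recursion is that memory cost stays constant regardless of footage length, which is what an end-to-end MLLM with a saturating context window cannot guarantee over month-long timelines.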