Towards Long-Form Spatio-Temporal Video Grounding

2026-02-26

Computer Vision and Pattern Recognition
AI summary

The authors study how to locate objects described by a textual query in very long videos, which is hard because such videos last a long time and contain many irrelevant parts. Their method, called ART-STVG, processes video frames one by one as a stream instead of all at once, making long videos easier to handle. It uses special memory banks to retain important visual and timing details, and selects only the memories relevant to the current frame to improve accuracy. It also connects the step that finds where the object is in space to the step that finds when it appears in time, so spatial cues assist the harder temporal localization. Tests show their method works better than previous ones, especially on longer videos.

spatio-temporal video grounding, long-form video, transformer architecture, memory banks, autoregressive processing, spatial localization, temporal localization, sequence modeling, video understanding, multimodal learning
Authors
Xin Gu, Bing Fan, Jiali Yao, Zhipeng Zhang, Yan Huang, Cheng Han, Heng Fan, Libo Zhang
Abstract
In real scenarios, videos can span several minutes or even hours. However, existing research on spatio-temporal video grounding (STVG), which localizes targets given a textual query, mainly focuses on short videos of tens of seconds, typically under one minute, which limits real-world applications. In this paper, we explore Long-Form STVG (LF-STVG), which aims to locate targets in long videos. Compared with short videos, long videos contain much longer temporal spans and more irrelevant information, which makes them difficult for existing STVG methods that process all frames at once. To address this challenge, we propose an AutoRegressive Transformer architecture for LF-STVG, termed ART-STVG. Unlike conventional STVG methods, which require the entire video sequence to make predictions at once, ART-STVG treats the video as a streaming input and processes frames sequentially, enabling efficient handling of long videos. To model spatio-temporal context, we design spatial and temporal memory banks and apply them in the decoders. Since memories from different moments are not always relevant to the current frame, we introduce simple yet effective memory selection strategies that provide the decoders with more relevant information, significantly improving performance. Furthermore, instead of performing spatial and temporal localization in parallel, we propose a cascaded spatio-temporal design that connects the spatial decoder to the temporal decoder, allowing fine-grained spatial cues to assist the complex temporal localization in long videos. Experiments on newly extended LF-STVG datasets show that ART-STVG significantly outperforms state-of-the-art methods, while achieving competitive performance on conventional short-form STVG.
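The abstract's core ideas, streaming frame-by-frame processing, a memory bank with relevance-based selection, and a cascade from spatial to temporal localization, can be illustrated with a toy sketch. This is not the authors' implementation: the feature dimensions, the cosine-similarity top-k selection, and the helper names (`select_memories`, `stream_ground`) are all hypothetical stand-ins for the paper's learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_memories(memory_bank, query, k=4):
    """Toy memory selection: keep the k stored features most
    similar (cosine similarity) to the current frame feature."""
    if not memory_bank:
        return np.empty((0, query.shape[0]))
    M = np.stack(memory_bank)                       # (n_mem, d)
    sims = M @ query / (
        np.linalg.norm(M, axis=1) * np.linalg.norm(query) + 1e-8
    )
    top = np.argsort(sims)[::-1][:k]                # indices of top-k memories
    return M[top]

def stream_ground(frames, text_feat, k=4):
    """Autoregressive pass over a frame stream (illustrative only).

    For each incoming frame: (1) select relevant memories,
    (2) form a spatial cue from frame + memory context,
    (3) feed that cue into the temporal score (the cascade),
    (4) append the frame feature to the memory bank.
    """
    memory_bank = []
    temporal_scores = []
    for f in frames:
        mem = select_memories(memory_bank, f, k)
        ctx = f if mem.size == 0 else 0.5 * f + 0.5 * mem.mean(axis=0)
        spatial_cue = float(ctx @ text_feat)        # stand-in spatial decoder output
        temporal_scores.append(spatial_cue)         # cascade: spatial cue -> temporal score
        memory_bank.append(f)
    return temporal_scores

# Toy run: 8 random frame features and one text feature.
d = 16
frames = [rng.standard_normal(d) for _ in range(8)]
text_feat = rng.standard_normal(d)
scores = stream_ground(frames, text_feat)
print(len(scores))  # one temporal relevance score per frame
```

The point of the sketch is the control flow: memory grows with the stream, but each step only attends to a fixed top-k subset, so per-frame cost stays bounded regardless of video length.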