FocusGraph: Graph-Structured Frame Selection for Embodied Long Video Question Answering

2026-03-04 · Computer Vision and Pattern Recognition

AI summary

The authors created FocusGraph, a framework that helps models understand long first-person videos by choosing the most relevant parts to answer questions. They developed a lightweight model called the Scene-Caption LLM Selector, which picks out query-relevant clips using textual scene descriptions rather than low-resolution video frames. A second, training-free method, Patch-wise Sparse-Flow Retention, then finds key frames within those clips. This approach makes question answering both faster and more accurate than prior methods, especially on challenging egocentric video benchmarks.

egocentric video, multimodal large language models, keyframe selection, scene captioning, question answering, Patch-wise Sparse-Flow Retention, video understanding, long-horizon memory, inference time, graph-based captions
Authors
Tatiana Zemskova, Solomon Andryushenko, Ilya Obrubov, Viktoriia Khoruzhaia, Ekaterina Eroshenko, Ekaterina Derevyanka, Dmitry Yudin
Abstract
The ability to understand long videos is vital for embodied intelligent agents, because their effectiveness depends on how well they can accumulate, organize, and leverage long-horizon perceptual memories. Recently, multimodal large language models (MLLMs) have been gaining popularity for the long video understanding task due to their general ability to understand natural language and to leverage world knowledge. However, as the number of frames provided to an MLLM increases, the quality of its responses tends to degrade and inference time grows. Therefore, when using MLLMs for long video understanding, a crucial step is selecting key frames from the video to answer user queries. In this work, we develop FocusGraph, a framework for keyframe selection for question answering over long egocentric videos. It leverages a lightweight trainable Scene-Caption LLM Selector that selects query-relevant clips based on their graph-based captions, and a training-free method for selecting keyframes from these clips. Unlike existing methods, the proposed Scene-Caption LLM Selector does not rely on the original sequence of low-resolution frames; instead, it operates on a compact textual representation of the scene. We then design a training-free Patch-wise Sparse-Flow Retention (PSFR) method to select keyframes from the resulting sequence of clips, which are fed into an MLLM to produce the final answer. Together, these components enable FocusGraph to achieve state-of-the-art results on challenging egocentric long-video question answering benchmarks, including FindingDory and HourVideo, while significantly reducing inference time relative to baseline approaches.
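The two-stage structure described in the abstract, select query-relevant clips from captions, then pick keyframes within them by patch-level motion, can be sketched as below. This is a minimal toy illustration, not the authors' method: the caption scorer stands in for the trained Scene-Caption LLM Selector with simple word overlap, and the keyframe picker stands in for PSFR with per-patch frame differences; all function names and parameters are hypothetical.

```python
import numpy as np

def select_clips(clip_captions, query, top_k=2):
    # Hypothetical stand-in for the Scene-Caption LLM Selector:
    # score each clip's textual caption by word overlap with the query
    # (the actual selector is a trained LLM over graph-based captions).
    q = set(query.lower().split())
    scores = [len(q & set(c.lower().split())) for c in clip_captions]
    return sorted(int(i) for i in np.argsort(scores)[-top_k:])

def select_keyframes(frames, patch=4, top_k=2):
    # Toy stand-in for Patch-wise Sparse-Flow Retention: rank frames by
    # the peak patch-level change relative to the previous frame, so
    # frames whose motion is concentrated in a few patches score high.
    T, H, W = frames.shape                       # grayscale video tensor
    diffs = np.abs(np.diff(frames, axis=0))      # (T-1, H, W)
    ph, pw = H // patch, W // patch
    patched = diffs.reshape(T - 1, patch, ph, patch, pw).mean(axis=(2, 4))
    scores = patched.reshape(T - 1, -1).max(axis=1)   # strongest patch motion
    keep = 1 + np.argsort(scores)[-top_k:]            # +1: diff i maps to frame i+1
    return sorted(keep.tolist())

# Pipeline: captions -> relevant clips -> keyframes -> (fed to an MLLM)
captions = ["person opens the fridge", "dog runs in park"]
clip_ids = select_clips(captions, "where is the fridge", top_k=1)   # [0]
video = np.zeros((5, 8, 8))
video[3, :2, :2] = 1.0                      # abrupt change at frame 3
keyframes = select_keyframes(video)         # [3, 4]
```

The real PSFR operates on sparse optical flow rather than raw frame differences, but the same retain-the-sparsely-moving-frames logic applies.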