VideoSeek: Long-Horizon Video Agent with Tool-Guided Seeking

2026-03-20

Computer Vision and Pattern Recognition · Artificial Intelligence · Computation and Language
AI summary

The authors developed VideoSeek, a video understanding agent that seeks out answer-critical clues in a video instead of watching every single frame. This targeted searching lets VideoSeek use far fewer frames while still understanding and reasoning about videos well. It works in a loop of thinking, acting, and observing, which helps it focus on the relevant parts of the video. Tests showed VideoSeek is more accurate and efficient than prior models, including its base model GPT-5, especially on hard video tasks.

video agentic models · video-language tasks · greedy parsing · video logic flow · think-act-observe loop · multi-granular observations · video understanding · reasoning benchmarks · LVBench · GPT-5
Authors
Jingyang Lin, Jialian Wu, Jiang Liu, Ximeng Sun, Ze Wang, Xiaodong Yu, Jiebo Luo, Zicheng Liu, Emad Barsoum
Abstract
Video agentic models have advanced challenging video-language tasks. However, most agentic approaches still rely heavily on greedy parsing over densely sampled video frames, resulting in high computational cost. We present VideoSeek, a long-horizon video agent that leverages video logic flow to actively seek answer-critical evidence instead of exhaustively parsing the full video. This insight allows the model to use far fewer frames while maintaining, or even improving, its video understanding capability. VideoSeek operates in a think-act-observe loop with a well-designed toolkit for collecting multi-granular video observations. This design enables query-aware exploration over accumulated observations and supports practical video understanding and reasoning. Experiments on four challenging video understanding and reasoning benchmarks demonstrate that VideoSeek achieves strong accuracy while using far fewer frames than prior video agents and standalone LMMs. Notably, VideoSeek achieves a 10.2-point absolute improvement on LVBench over its base model, GPT-5, while using 93% fewer frames. Further analysis highlights the significance of leveraging video logic flow, strong reasoning capability, and the complementary roles of toolkit design.
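The think-act-observe loop described above can be sketched in miniature. This is a hypothetical illustration, not VideoSeek's actual implementation: the tool names (`glance`, `zoom`), their arguments, the canned observation strings, and the stopping rule are all assumptions introduced here to show how an agent might coarsely scan a video, then zoom into one segment, and stop once it has gathered enough evidence to answer.

```python
# Hypothetical sketch of a think-act-observe video agent loop.
# Tool names, arguments, and observations are illustrative, not VideoSeek's API.
from dataclasses import dataclass, field

@dataclass
class Observation:
    tool: str
    result: str

@dataclass
class AgentState:
    query: str
    observations: list = field(default_factory=list)

def think(state: AgentState):
    """Pick the next tool call from the query and accumulated observations."""
    if not state.observations:
        return ("glance", {"stride": 64})         # coarse pass: sparse frames
    if len(state.observations) == 1:
        return ("zoom", {"segment": (120, 180)})  # fine pass over one segment
    return ("answer", {})                         # enough evidence gathered

def act(tool: str, args: dict) -> Observation:
    """Execute a tool; here each toy tool returns a canned observation string."""
    toolkit = {
        "glance": lambda a: f"coarse scan, stride={a['stride']}: event near t=150s",
        "zoom":   lambda a: f"fine scan of {a['segment']}: person opens the door",
    }
    return Observation(tool, toolkit[tool](args))

def run_agent(query: str, max_steps: int = 8):
    """Think-act-observe loop: iterate until the agent decides to answer."""
    state = AgentState(query)
    for _ in range(max_steps):
        tool, args = think(state)
        if tool == "answer":
            return state.observations[-1].result  # answer from latest evidence
        state.observations.append(act(tool, args))
    return None  # budget exhausted without an answer
```

In this toy run the agent touches only two "frame budgets" (one coarse scan, one zoomed segment) before answering, which mirrors the paper's claim of answering with far fewer frames than exhaustive parsing; a real agent would replace `think` with an LMM call and `act` with actual frame-sampling tools.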