PerceptionComp: A Video Benchmark for Complex Perception-Centric Reasoning
2026-03-27 • Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition • Artificial Intelligence • Computation and Language • Machine Learning
AI summary
The authors created PerceptionComp, a new video reasoning test that requires looking at long videos and combining multiple visual clues over time to answer complex questions. Each question demands understanding objects, actions, and their relationships across different moments, making it much harder than previous tests. Human participants found it challenging and needed to watch videos multiple times to answer correctly. Current AI models also struggle with this benchmark, showing that long-term video understanding is still a difficult problem. The authors hope PerceptionComp will help improve AI's ability to reason about complicated visual stories.
video reasoning • perception • temporal reasoning • spatial reasoning • semantic recognition • visual correspondence • benchmark dataset • compositional logic • machine learning • multimodal language models
Authors
Shaoxuan Li, Zhixuan Zhao, Hanze Deng, Zirun Ma, Shulin Tian, Zuyan Liu, Yushi Hu, Haoning Wu, Yuhao Dong, Benlin Liu, Ziwei Liu, Ranjay Krishna
Abstract
We introduce PerceptionComp, a manually annotated benchmark for complex, long-horizon, perception-centric video reasoning. PerceptionComp is designed so that no single moment is sufficient: answering each question requires combining multiple temporally separated pieces of visual evidence under conjunctive and sequential compositional constraints. The questions span perceptual subtasks such as objects, attributes, relations, locations, actions, and events, and demand skills including semantic recognition, visual correspondence, temporal reasoning, and spatial reasoning. The benchmark contains 1,114 highly complex questions on 279 videos from diverse domains, including city walk tours, indoor villa tours, video games, and extreme outdoor sports, all of which are manually annotated. Human studies show that PerceptionComp requires substantial test-time thinking and repeated perception: participants take much longer than on prior benchmarks, and accuracy drops to near chance (18.97%) when rewatching is disallowed. State-of-the-art MLLMs also perform substantially worse on PerceptionComp than on existing benchmarks: the best model in our evaluation, Gemini-3-Flash, reaches only 45.96% accuracy in the five-choice setting, while open-source models remain below 40%. These results suggest that perception-centric long-horizon video reasoning remains a major bottleneck, and we hope PerceptionComp will help drive progress in perceptual reasoning.
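For context, the five-choice setting mentioned in the abstract implies a 20% random-guessing baseline, against which the 18.97% human no-rewatch score and the 45.96% Gemini-3-Flash score can be read. The sketch below is only an illustrative accuracy computation with hypothetical field names; it is not the benchmark's official evaluation code.

```python
# Minimal sketch of five-choice accuracy scoring (illustrative only;
# field names and the example data are hypothetical, not PerceptionComp's
# official evaluation script).

CHANCE_LEVEL = 1 / 5  # random guessing among five options = 20%

def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of questions where the predicted option letter matches the key."""
    assert len(predictions) == len(answers) and answers
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

if __name__ == "__main__":
    preds = ["A", "C", "B", "E", "D"]  # hypothetical model outputs
    gold  = ["A", "B", "B", "E", "C"]  # hypothetical answer key
    print(f"accuracy = {accuracy(preds, gold):.2%}, chance = {CHANCE_LEVEL:.0%}")
```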