OMIBench: Benchmarking Olympiad-Level Multi-Image Reasoning in Large Vision-Language Models

2026-04-22

Computer Vision and Pattern Recognition · Artificial Intelligence · Computation and Language
AI summary

The authors created OMIBench, a new test of how well large vision-language models (LVLMs) can solve hard problems that require combining clues from multiple images. Unlike previous tests that focus on a single image, OMIBench draws its questions from science and math competitions and checks answers with both exact and semantic matching. When the authors evaluated popular LVLMs such as Gemini-3-Pro, they found that these models still struggle, scoring only about 50%. OMIBench therefore highlights where these models need to improve at reasoning across several images.

Large Vision-Language Models · Multimodal Reasoning · Olympiad-Level Problems · Benchmarking · Multi-Image Analysis · Biology Olympiad · Chemistry Olympiad · Mathematics Olympiad · Physics Olympiad · Semantic Answer Matching
Authors
Qiguang Chen, Chengyu Luan, Jiajun Wu, Qiming Yu, Yi Yang, Yizhuo Li, Jingqi Tong, Xiachong Feng, Libo Qin, Wanxiang Che
Abstract
Large vision-language models (LVLMs) have made substantial advances in reasoning tasks at the Olympiad level. Nevertheless, current Olympiad-level multimodal reasoning benchmarks for these models often emphasize single-image analysis and fail to exploit contextual information across multiple images. We present OMIBench, a benchmark designed to evaluate Olympiad-level reasoning when the required evidence is distributed over multiple images. It contains problems from biology, chemistry, mathematics, and physics Olympiads, together with manually annotated rationales and evaluation protocols for both exact and semantic answer matching. Across extensive experiments on OMIBench, we observe meaningful performance gaps in existing models. Even the strongest LVLMs, such as Gemini-3-Pro, attain only about 50% on the benchmark. These results position OMIBench as a focused resource for studying and improving multi-image reasoning in LVLMs.
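
The abstract mentions evaluation protocols for both exact and semantic answer matching. As a rough illustration of what such a two-mode scorer could look like, here is a minimal Python sketch; the function names, normalization rules, similarity threshold, and the difflib stand-in for semantic comparison are all assumptions for illustration, not OMIBench's actual implementation (a real evaluator would more likely use an LLM judge or embedding similarity).

```python
# Hypothetical sketch of the two answer-matching modes named in the abstract.
# Everything below (normalization rules, threshold, difflib stand-in) is an
# illustrative assumption, not the OMIBench protocol.
import difflib
import re


def normalize(answer: str) -> str:
    """Lowercase, trim, and collapse whitespace before comparison."""
    return re.sub(r"\s+", " ", answer.strip().lower())


def exact_match(prediction: str, reference: str) -> bool:
    """Exact answer matching: normalized strings must be identical."""
    return normalize(prediction) == normalize(reference)


def semantic_match(prediction: str, reference: str, threshold: float = 0.85) -> bool:
    """Semantic answer matching: accept near-equivalent phrasings.

    difflib's character-level similarity is only a self-contained
    placeholder for a genuine semantic comparison.
    """
    ratio = difflib.SequenceMatcher(
        None, normalize(prediction), normalize(reference)
    ).ratio()
    return ratio >= threshold


if __name__ == "__main__":
    print(exact_match("  Photosynthesis ", "photosynthesis"))   # True: identical after normalization
    print(semantic_match("photosynthesis.", "photosynthesis"))  # True: near-identical strings
```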