MM-CondChain: A Programmatically Verified Benchmark for Visually Grounded Deep Compositional Reasoning

2026-03-12 · Computer Vision and Pattern Recognition

AI summary

The authors created a new benchmark called MM-CondChain to check how well multimodal large language models (MLLMs) can follow complex visual instructions that depend on multiple detailed conditions. Each test instance is a chain of steps, where each step requires careful inspection of different parts of an image and a decision based on what is seen. They built a system that generates these complicated tests automatically and made versions for photos, charts, and computer interfaces. Their experiments showed that even the best models struggle with this deep, step-by-step reasoning, especially as the chains get deeper or the conditions more complex.

Multimodal Large Language Models · Visual Reasoning · Compositional Conditionals · Benchmark · GUI Navigation · Visual Workflows · Agentic Synthesis Pipeline · Programmatic Verification · Path F1 · Deep Compositional Reasoning
Authors
Haozhan Shen, Shilin Yan, Hongwei Xue, Shuaiqi Lu, Xiaojun Tang, Guannan Zhang, Tiancheng Zhao, Jianwei Yin
Abstract
Multimodal Large Language Models (MLLMs) are increasingly used to carry out visual workflows such as navigating GUIs, where the next step depends on verified visual compositional conditions (e.g., "if a permission dialog appears and the color of the interface is green, click Allow") and the process may branch or terminate early. Yet this capability remains under-evaluated: existing benchmarks focus on shallow compositions or independent constraints rather than deeply chained compositional conditionals. In this paper, we introduce MM-CondChain, a benchmark for visually grounded deep compositional reasoning. Each benchmark instance is organized as a multi-layer reasoning chain, where every layer contains a non-trivial compositional condition grounded in visual evidence and built from multiple objects, attributes, or relations. To answer correctly, an MLLM must perceive the image in detail, reason over multiple visual elements at each step, and follow the resulting execution path to the final outcome. To construct such workflow-style data at scale, we propose an agentic synthesis pipeline: a Planner orchestrates layer-by-layer generation of compositional conditions, while a Verifiable Programmatic Intermediate Representation (VPIR) ensures each layer's condition is mechanically verifiable. A Composer then assembles these verified layers into complete instructions. Using this pipeline, we construct benchmarks across three visual domains: natural images, data charts, and GUI trajectories. Experiments on a range of MLLMs show that even the strongest model attains a Path F1 of only 53.33, with sharp drops on hard negatives and as depth or predicate complexity grows, confirming that deep compositional reasoning remains a fundamental challenge.
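
To make the VPIR idea concrete, here is a minimal sketch of what a mechanically verifiable condition chain could look like. This is an illustration under assumptions, not the paper's actual interface: the `Layer` class, the `evaluate_chain` function, and the scene-annotation schema are all hypothetical, chosen only to show how a compositional condition (multiple objects and attributes combined with logical connectives) can be checked programmatically and how the chain can branch or terminate early.

```python
# Hypothetical VPIR-style condition chain: each layer's compositional
# condition is a boolean predicate over a structured scene annotation,
# so it can be verified mechanically rather than by human judgment.
from dataclasses import dataclass
from typing import Callable

Scene = dict  # e.g., {"objects": [{"name": "dialog", "color": "green"}, ...]}

def has_object(name: str) -> Callable[[Scene], bool]:
    # True if an object with this name appears in the annotation.
    return lambda s: any(o["name"] == name for o in s["objects"])

def attr_equals(name: str, attr: str, value: str) -> Callable[[Scene], bool]:
    # True if the named object has the given attribute value.
    return lambda s: any(o["name"] == name and o.get(attr) == value
                         for o in s["objects"])

def all_of(*preds: Callable[[Scene], bool]) -> Callable[[Scene], bool]:
    # Logical AND over several visual predicates (a compositional condition).
    return lambda s: all(p(s) for p in preds)

@dataclass
class Layer:
    condition: Callable[[Scene], bool]  # mechanically checkable condition
    if_true: str                        # action taken when condition holds
    if_false: str                       # action taken otherwise

def evaluate_chain(layers: list[Layer], scene: Scene) -> list[str]:
    """Follow the chain layer by layer, recording the executed path.
    A 'STOP' action terminates early, mirroring the branch/terminate
    behaviour the abstract describes."""
    path = []
    for layer in layers:
        action = layer.if_true if layer.condition(scene) else layer.if_false
        path.append(action)
        if action == "STOP":
            break
    return path

# Mirrors the abstract's example: "if a permission dialog appears and
# the color of the interface is green, click Allow".
chain = [Layer(all_of(has_object("permission_dialog"),
                      attr_equals("interface", "color", "green")),
               if_true="click Allow", if_false="STOP")]
scene = {"objects": [{"name": "permission_dialog"},
                     {"name": "interface", "color": "green"}]}
print(evaluate_chain(chain, scene))  # ['click Allow']
```

Representing conditions as executable predicates is what makes each layer "mechanically verifiable": a generated instance can be accepted or rejected by running the predicate against the ground-truth annotation, with no human in the loop.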
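
The abstract reports results as Path F1 but does not define the metric. The sketch below shows one plausible reading, assuming the predicted execution path is scored against the gold path as multisets of steps; the paper's actual definition may differ (for example, by requiring ordered or prefix-aligned matches).

```python
# Hedged sketch of a Path-F1-style score: F1 over the overlap between the
# predicted and gold step sequences, treated as multisets. Illustrative only.
from collections import Counter

def path_f1(predicted: list[str], gold: list[str]) -> float:
    if not predicted or not gold:
        return 0.0
    overlap = sum((Counter(predicted) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(path_f1(["click Allow", "STOP"],
              ["click Allow", "click OK", "STOP"]))  # 0.8
```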