GRADE: Benchmarking Discipline-Informed Reasoning in Image Editing

2026-03-12
Computer Vision and Pattern Recognition
AI summary

The authors created GRADE, a new benchmark that tests how well models can edit images when the edit requires specialized knowledge from academic fields. Unlike earlier benchmarks, which mostly use everyday pictures and simple commonsense reasoning, GRADE draws its images from 10 academic areas and checks whether the edits are correct for the field, visually consistent, and logically readable. The authors evaluated 20 widely used models and found that they struggle substantially when editing requires deep, expert knowledge. The work highlights where current tools fall short and points to future improvements in image editing informed by disciplinary knowledge.

Unified multimodal models, Image editing, Discipline-informed knowledge, Reasoning evaluation, Visual consistency, Logical readability, Commonsense reasoning, Benchmark dataset, Open-source models, Domain-specific constraints
Authors
Mingxin Liu, Ziqian Fan, Zhaokai Wang, Leyao Gu, Zirun Zhu, Yiguo He, Yuchen Yang, Changyao Tian, Xiangyu Zhao, Ning Liao, Shaofeng Zhang, Qibing Ren, Zhihang Zhong, Xuanhe Zhou, Junchi Yan, Xue Yang
Abstract
Unified multimodal models target joint understanding, reasoning, and generation, but current image editing benchmarks are largely confined to natural images and shallow commonsense reasoning, offering limited assessment of this capability under structured, domain-specific constraints. In this work, we introduce GRADE, the first benchmark to assess discipline-informed knowledge and reasoning in image editing. GRADE comprises 520 carefully curated samples across 10 academic domains, spanning from natural science to social science. To support rigorous evaluation, we propose a multi-dimensional evaluation protocol that jointly assesses Discipline Reasoning, Visual Consistency, and Logical Readability. Extensive experiments on 20 state-of-the-art open-source and closed-source models reveal substantial limitations in current models under implicit, knowledge-intensive editing settings, leading to large performance gaps. Beyond quantitative scores, we conduct rigorous analyses and ablations to expose model shortcomings and identify the constraints within disciplinary editing. Together, GRADE pinpoints key directions for the future development of unified multimodal models, advancing the research on discipline-informed image editing and reasoning. Our benchmark and evaluation code are publicly released.