VEFX-Bench: A Holistic Benchmark for Generic Video Editing and Visual Effects

2026-04-17

Computer Vision and Pattern Recognition · Artificial Intelligence · Computation and Language
AI summary

The authors created a large dataset called VEFX-Dataset with over 5,000 examples of video edits, each labeled by humans on how well the edits follow instructions, look good, and keep changes local. They also developed VEFX-Reward, a model that scores video edits by analyzing the original video, the instruction, and the edited video, aiming to match human judgments better than existing methods. To help compare editing systems fairly, they released VEFX-Bench, a set of 300 video-instruction pairs. Their tests show current video editing tools still struggle to balance looking realistic, following instructions accurately, and keeping edits focused where needed.

Keywords
video editing, instruction following, human-annotated dataset, evaluation metrics, reward model, ordinal regression, vision-language models, video quality assessment
Authors
Xiangbo Gao, Sicong Jiang, Bangya Liu, Xinghao Chen, Minglai Yang, Siyuan Yang, Mingyang Wu, Jiongze Yu, Qi Zheng, Haozhi Wang, Jiayi Zhang, Jared Yang, Jie Yang, Zihan Wang, Qing Yin, Zhengzhong Tu
Abstract
As AI-assisted video creation becomes increasingly practical, instruction-guided video editing has become essential for refining generated or captured footage to meet professional requirements. Yet the field still lacks both a large-scale human-annotated dataset with complete editing examples and a standardized evaluator for comparing editing systems. Existing resources are limited by small scale, missing edited outputs, or the absence of human quality labels, while current evaluation often relies on expensive manual inspection or generic vision-language model judges that are not specialized for editing quality. We introduce VEFX-Dataset, a human-annotated dataset containing 5,049 video editing examples across 9 major editing categories and 32 subcategories, each labeled along three decoupled dimensions: Instruction Following, Rendering Quality, and Edit Exclusivity. Building on VEFX-Dataset, we propose VEFX-Reward, a reward model designed specifically for video editing quality assessment. VEFX-Reward jointly processes the source video, the editing instruction, and the edited video, and predicts per-dimension quality scores via ordinal regression. We further release VEFX-Bench, a benchmark of 300 curated video-prompt pairs for standardized comparison of editing systems. Experiments show that VEFX-Reward aligns more strongly with human judgments than generic VLM judges and prior reward models on both standard IQA/VQA metrics and group-wise preference evaluation. Using VEFX-Reward as an evaluator, we benchmark representative commercial and open-source video editing systems, revealing a persistent gap between visual plausibility, instruction following, and edit locality in current models.
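The abstract states that VEFX-Reward predicts per-dimension quality scores via ordinal regression but does not detail the formulation. Below is a minimal sketch of one standard choice, a CORAL-style rank-consistent ordinal head, assuming a fused video-text embedding and a five-level rating scale; the class names, dimensions, and the CORAL formulation itself are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of "per-dimension quality scores via ordinal regression".
# CORAL-style head: one shared projection plus per-threshold biases, so the
# K-1 cumulative logits are rank-consistent by construction. Not the paper's code.
import torch
import torch.nn as nn

class OrdinalQualityHead(nn.Module):
    """Scores one quality dimension (e.g., Instruction Following) on an
    ordinal 1..num_levels scale from a fused video-text embedding."""

    def __init__(self, embed_dim: int = 768, num_levels: int = 5):  # assumed sizes
        super().__init__()
        self.fc = nn.Linear(embed_dim, 1, bias=False)                 # shared across thresholds
        self.thresholds = nn.Parameter(torch.zeros(num_levels - 1))   # one bias per threshold

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # logits[:, k] models P(score > k+1); shape (batch, num_levels - 1)
        return self.fc(z) + self.thresholds

    @staticmethod
    def decode(logits: torch.Tensor) -> torch.Tensor:
        # Predicted score = 1 + number of thresholds the example exceeds.
        return 1 + (torch.sigmoid(logits) > 0.5).sum(dim=-1)

def coral_loss(logits: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over the K-1 cumulative 'score > k' targets,
    where `levels` holds integer human ratings in {1, ..., K}."""
    cutpoints = torch.arange(1, logits.size(-1) + 1, device=logits.device)
    targets = (levels.unsqueeze(-1) > cutpoints).float()
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)
```

Under this reading, the full reward model would attach three such heads to its video-language backbone, one per annotated dimension (Instruction Following, Rendering Quality, Edit Exclusivity), and decode each independently into a per-dimension score.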