From Plans to Pixels: Learning to Plan and Orchestrate for Open-Ended Image Editing

2026-05-14
Computer Vision and Pattern Recognition


Authors
Anirudh Sundara Rajan, Krishna Kumar Singh, Yong Jae Lee
Abstract
Modern image editing models produce realistic results but struggle with abstract, multi-step instructions (e.g., "make this advertisement more vegetarian-friendly"). Prior agent-based methods decompose such tasks but rely on handcrafted pipelines or teacher imitation, limiting flexibility and decoupling learning from actual editing outcomes. We propose an experiential framework for long-horizon image editing, where a planner generates structured atomic decompositions and an orchestrator selects tools and regions to execute each step. A vision-language judge provides outcome-based rewards for instruction adherence and visual quality. The orchestrator is trained to maximize these rewards, and successful trajectories are used to refine the planner. By tightly coupling planning with reward-driven execution, our approach yields more coherent and reliable edits than single-step or rule-based multi-step baselines.
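The planner → orchestrator → judge loop described above can be sketched roughly as follows. This is a minimal illustrative skeleton, not the authors' implementation: every name here (`plan`, `choose_action`, `judge_reward`, `edit`) is a hypothetical stand-in, and the planner, orchestrator, and judge are stubbed where the paper would use learned models.

```python
# Hedged sketch of the abstract's experiential editing loop.
# All function names and return shapes are illustrative assumptions;
# the real system uses learned components, not these stubs.

from dataclasses import dataclass


@dataclass
class Step:
    instruction: str  # one atomic edit, e.g. "replace steak with vegetables"


def plan(instruction: str) -> list[Step]:
    """Planner: decompose an abstract request into atomic steps (stubbed)."""
    return [
        Step("replace the steak with grilled vegetables"),
        Step("add a green 'plant-based' badge to the corner"),
    ]


def choose_action(step: Step) -> dict:
    """Orchestrator: select a tool and region for one atomic step (stubbed)."""
    return {"tool": "inpaint", "region": "auto", "prompt": step.instruction}


def judge_reward(trajectory: list[dict], instruction: str) -> float:
    """Vision-language judge: outcome-based reward for instruction
    adherence and visual quality (stubbed to a constant here)."""
    return 1.0 if trajectory else 0.0


def edit(instruction: str) -> tuple[list[dict], float]:
    """Run one full episode: plan, execute each step, score the outcome.
    High-reward trajectories would then refine the planner."""
    steps = plan(instruction)
    trajectory = [choose_action(s) for s in steps]
    reward = judge_reward(trajectory, instruction)
    return trajectory, reward
```

In the paper's framing, the orchestrator's policy is trained to maximize `judge_reward`, and successful (high-reward) trajectories are fed back to improve the planner's decompositions; the stub above only fixes the control flow that connects those three components.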