AlphaGRPO: Unlocking Self-Reflective Multimodal Generation in UMMs via Decompositional Verifiable Reward

2026-05-12

Computer Vision and Pattern Recognition, Artificial Intelligence, Machine Learning
AI summary

The authors present AlphaGRPO, a new method that improves how unified multimodal models (which handle images and text together) generate content. Their method helps the model infer hidden user intentions and fix its own mistakes, without requiring an extra cold-start training stage. To guide the model better, they create a reward system that uses large language models to break complicated requests down into smaller, individually checkable questions. Tests show that AlphaGRPO performs better on several generation benchmarks and on editing tasks, even without any editing-specific training. This suggests their approach helps the model use its understanding to create higher-quality outputs.

AlphaGRPO, Group Relative Policy Optimization (GRPO), Multimodal Models, Self-Reflective Refinement, Decompositional Verifiable Reward (DVReward), Large Language Models (LLM), Text-to-Image Generation, Reinforcement Learning, Model Editing
Authors
Runhui Huang, Jie Wu, Rui Yang, Zhe Liu, Hengshuang Zhao
Abstract
In this paper, we propose AlphaGRPO, a novel framework that applies Group Relative Policy Optimization (GRPO) to AR-Diffusion Unified Multimodal Models (UMMs) to enhance multimodal generation capabilities without an additional cold-start stage. Our approach unlocks the model's intrinsic potential to perform advanced reasoning tasks: Reasoning Text-to-Image Generation, where the model actively infers implicit user intents, and Self-Reflective Refinement, where it autonomously diagnoses and corrects misalignments in generated outputs. To address the challenge of providing stable supervision for real-world multimodal generation, we introduce the Decompositional Verifiable Reward (DVReward). Unlike holistic scalar rewards, DVReward utilizes an LLM to decompose complex user requests into atomic, verifiable semantic and quality questions, which are then evaluated by a general MLLM to provide reliable and interpretable feedback. Extensive experiments demonstrate that AlphaGRPO yields robust improvements across multimodal generation benchmarks, including GenEval, TIIF-Bench, DPG-Bench, and WISE, while also achieving significant gains on the GEdit editing benchmark without any editing-specific training. These results validate that our self-reflective reinforcement approach effectively leverages inherent understanding to guide high-fidelity generation. Project page: https://huangrh99.github.io/AlphaGRPO/
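The DVReward idea sketched in the abstract (an LLM decomposes a request into atomic yes/no checks, an MLLM answers each one, and the answers are aggregated into a reward) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: `decompose` and `verify` are hypothetical stand-ins for the real LLM and MLLM calls, and the fixed question list and simple averaging are assumptions for illustration.

```python
def decompose(prompt: str) -> list[str]:
    # Stand-in for the LLM that splits a user request into atomic,
    # verifiable semantic/quality questions (fixed illustrative output;
    # a real system would query an LLM here).
    return [
        f"Does the image depict: {prompt}?",
        "Are all requested objects clearly present?",
        "Is the image free of obvious visual artifacts?",
    ]

def verify(image, question: str) -> bool:
    # Stand-in for the MLLM that answers one atomic yes/no question
    # about the generated image.
    raise NotImplementedError("replace with a real MLLM call")

def dv_reward(image, prompt: str, verify_fn=verify) -> float:
    # Aggregate the per-question answers into one scalar reward:
    # here, simply the fraction of atomic checks that pass.
    questions = decompose(prompt)
    passed = [verify_fn(image, q) for q in questions]
    return sum(passed) / len(questions)
```

Because each question is atomic, the resulting reward is interpretable: a low score can be traced back to the specific checks that failed, unlike a single holistic scalar.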