UniGRPO: Unified Policy Optimization for Reasoning-Driven Visual Generation

2026-03-24 · Computer Vision and Pattern Recognition

AI summary

The authors propose a new method to improve systems that generate text and images together in a back-and-forth way. They treat the process as a sequential decision problem in which the model first thinks through the text prompt and then creates an image, using reinforcement learning to improve both steps simultaneously. To scale to more complex back-and-forth interactions, they make two specific technical changes to the image-generation part that keep the process efficient and stable. Their experiments show that this unified approach improves the quality of images generated from reasoning-based prompts and provides a solid foundation for future models that combine text and images.

interleaved generation, autoregressive modeling, reinforcement learning, Markov Decision Process, GRPO, FlowGRPO, classifier-free guidance, latent KL penalty, velocity fields, reward hacking
Authors
Jie Liu, Zilyu Ye, Linxiao Yuan, Shenhan Zhu, Yu Gao, Jie Wu, Kunchang Li, Xionghui Wang, Xiaonan Nie, Weilin Huang, Wanli Ouyang
Abstract
Unified models capable of interleaved generation have emerged as a promising paradigm, with the community increasingly converging on autoregressive modeling for text and flow matching for image generation. To advance this direction, we propose a unified reinforcement learning framework tailored for interleaved generation. We validate our approach on its fundamental unit: a single round of reasoning-driven image generation, where the model first expands the user prompt through reasoning, followed by image synthesis. Formulating this multimodal generation process as a Markov Decision Process with sparse terminal rewards, we introduce UniGRPO to jointly optimize text and image generation policies using GRPO. Adopting a minimalist methodology to avoid over-design, we leverage established training recipes for both modalities by seamlessly integrating standard GRPO for reasoning and FlowGRPO for visual synthesis. To ensure scalability to multi-round interleaved generation, we introduce two critical modifications to the original FlowGRPO: (1) eliminating classifier-free guidance to maintain linear, unbranched rollouts, which is essential for scaling to complex scenarios involving multi-turn interactions and multi-condition generation (e.g., editing); and (2) replacing the standard latent KL penalty with an MSE penalty directly on the velocity fields, providing a more robust and direct regularization signal that effectively mitigates reward hacking. Our experiments demonstrate that this unified training recipe significantly enhances image generation quality through reasoning, providing a robust and scalable baseline for future post-training of fully interleaved models.
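The abstract's second modification, replacing the latent KL penalty with an MSE penalty on the velocity fields, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the `beta` coefficient, and the use of numpy arrays in place of model outputs are all assumptions made for illustration. The idea is that the policy's predicted velocity field is regularized directly toward the frozen reference model's velocity field at the same latent and timestep, rather than penalizing divergence between latent distributions.

```python
import numpy as np

def velocity_mse_penalty(v_policy, v_ref, beta=0.01):
    """Hypothetical sketch of an MSE regularizer on velocity fields.

    v_policy: velocity predicted by the current (trainable) flow model.
    v_ref:    velocity predicted by the frozen reference model at the
              same latent state and timestep.
    beta:     regularization strength (assumed hyperparameter).
    """
    return beta * np.mean((v_policy - v_ref) ** 2)

def regularized_loss(policy_loss, v_policy, v_ref, beta=0.01):
    """Total objective: RL policy loss plus the velocity-field penalty,
    standing in for the latent KL term in the original FlowGRPO."""
    return policy_loss + velocity_mse_penalty(v_policy, v_ref, beta)
```

Because the penalty compares velocity predictions pointwise, it gives a direct, always-well-defined regularization signal even when the sampling trajectory has drifted, which is the robustness property the abstract attributes to this choice.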