Flow-OPD: On-Policy Distillation for Flow Matching Models

2026-05-08

Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors identify two problems with existing Flow Matching models in text-to-image tasks: sparse reward signals and conflicting objectives when training toward multiple goals at once. To fix this, they propose Flow-OPD, which first trains specialized teacher models and then consolidates their strengths into one student model through a multi-step on-policy distillation process. They also add a technique called Manifold Anchor Regularization to preserve visual quality and avoid the aesthetic degradation common in RL-based alignment. Testing on Stable Diffusion 3.5 shows notable improvements in prompt-following (GenEval) and text-rendering accuracy (OCR), surpassing previous methods while maintaining image fidelity. The authors present Flow-OPD as a promising way to improve text-to-image models across multiple tasks.

Flow Matching, On-Policy Distillation, Text-to-Image Generation, Reward Sparsity, Gradient Interference, Stable Diffusion, Manifold Anchor Regularization, Reinforcement Learning, Multi-task Alignment, Generative Models
Authors
Zhen Fang, Wenxuan Huang, Yu Zeng, Yiming Zhao, Shuang Chen, Kaituo Feng, Yunlong Lin, Lin Chen, Zehui Chen, Shaosheng Cao, Feng Zhao
Abstract
Existing Flow Matching (FM) text-to-image models suffer from two critical bottlenecks under multi-task alignment: the reward sparsity induced by scalar-valued rewards, and the gradient interference arising from jointly optimizing heterogeneous objectives, which together give rise to a 'seesaw effect' of competing metrics and pervasive reward hacking. Inspired by the success of On-Policy Distillation (OPD) in the large language model community, we propose Flow-OPD, the first unified post-training framework that integrates on-policy distillation into Flow Matching models. Flow-OPD adopts a two-stage alignment strategy: it first cultivates domain-specialized teacher models via single-reward GRPO fine-tuning, allowing each expert to reach its performance ceiling in isolation; it then establishes a robust initial policy through a Flow-based Cold-Start scheme and seamlessly consolidates heterogeneous expertise into a single student via a three-step orchestration of on-policy sampling, task-routing labeling, and dense trajectory-level supervision. We further introduce Manifold Anchor Regularization (MAR), which leverages a task-agnostic teacher to provide full-data supervision that anchors generation to a high-quality manifold, effectively mitigating the aesthetic degradation commonly observed in purely RL-driven alignment. Built upon Stable Diffusion 3.5 Medium, Flow-OPD raises the GenEval score from 63 to 92 and the OCR accuracy from 59 to 94, yielding an overall improvement of roughly 10 points over vanilla GRPO, while preserving image fidelity and human-preference alignment and exhibiting an emergent 'teacher-surpassing' effect. These results establish Flow-OPD as a scalable alignment paradigm for building generalist text-to-image models.
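To make the distillation stage of the abstract more concrete, the sketch below shows one plausible reading of its third step: the student rolls out its own sampling trajectory (on-policy sampling), a routed specialist teacher supervises the student's velocity predictions at every visited state (dense trajectory-level supervision), and a task-agnostic anchor teacher contributes a Manifold Anchor Regularization (MAR) term. This is a minimal sketch under stated assumptions, not the authors' implementation: the module names (TinyVelocityNet, sample_student_trajectory, distill_step), shapes, time convention, step counts, and the mar_weight coefficient are all hypothetical stand-ins.

```python
# Hypothetical sketch of trajectory-level on-policy distillation for a flow
# matching student, plus a Manifold Anchor Regularization (MAR) term.
# All names, shapes, and weights are illustrative assumptions.

import torch
import torch.nn as nn


class TinyVelocityNet(nn.Module):
    """Stand-in for a flow matching velocity predictor v(x_t, t)."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar time to a per-sample feature and concatenate.
        t_feat = t.expand(x_t.shape[0], 1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))


def sample_student_trajectory(student: nn.Module, x_noise: torch.Tensor, steps: int = 8):
    """On-policy rollout: integrate the student's ODE from noise toward data.

    Returns the visited states and times so teachers can supervise the student
    densely along its *own* trajectory (the on-policy part of the recipe).
    """
    xs, ts = [], []
    x, dt = x_noise, 1.0 / steps
    for i in range(steps):
        t = torch.tensor([[1.0 - i * dt]])  # convention: t=1 noise, t=0 data
        with torch.no_grad():
            v = student(x, t)
        xs.append(x)
        ts.append(t)
        x = x - dt * v  # Euler step along the student's predicted velocity
    return xs, ts


def distill_step(student, routed_teacher, anchor_teacher, x_noise, mar_weight: float = 0.1):
    """One optimization step: dense trajectory supervision + MAR anchor."""
    xs, ts = sample_student_trajectory(student, x_noise)
    loss = x_noise.new_zeros(())
    for x_t, t in zip(xs, ts):
        v_student = student(x_t, t)
        with torch.no_grad():
            # Task-routing: a specialist (e.g. GRPO-tuned OCR or GenEval teacher)
            # selected for this prompt supervises the state.
            v_expert = routed_teacher(x_t, t)
            # Task-agnostic anchor keeps generations near a high-quality manifold.
            v_anchor = anchor_teacher(x_t, t)
        loss = loss + (v_student - v_expert).pow(2).mean()
        loss = loss + mar_weight * (v_student - v_anchor).pow(2).mean()
    return loss / len(xs)


if __name__ == "__main__":
    torch.manual_seed(0)
    student = TinyVelocityNet()
    expert = TinyVelocityNet()   # stand-in for a domain-specialized teacher
    anchor = TinyVelocityNet()   # stand-in for the task-agnostic MAR teacher
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    opt.zero_grad()
    loss = distill_step(student, expert, anchor, torch.randn(4, 16))
    loss.backward()
    opt.step()
    print(f"distillation loss: {loss.item():.4f}")
```

One design choice worth noting: the sketch stops gradients through the rollout and treats the visited states as fixed data, which is a common simplification in distillation for diffusion and flow models; the abstract does not specify whether Flow-OPD backpropagates through the sampler or how the routed and anchor losses are weighted in practice.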