LeapAlign: Post-Training Flow Matching Models at Any Generation Step by Building Two-Step Trajectories

2026-04-16 · Computer Vision and Pattern Recognition

AI summary

The authors developed LeapAlign, a new method to improve flow matching models by fine-tuning them to better match human preferences. Traditional fine-tuning methods are slow and memory-heavy because they need to process long sequences of steps, which also makes training unstable. LeapAlign cleverly shortens these sequences into just two big jumps, making the process faster and more stable. This allows the model to update early generation steps, which are important for the overall image structure. Their experiments show LeapAlign performs better than previous methods in producing high-quality and well-aligned images.

flow matching, fine-tuning, backpropagation, reward gradients, trajectory, ODE sampling, latent prediction, gradient stability, image generation, image-text alignment
Authors
Zhanhao Liang, Tao Yang, Jie Wu, Chengjian Feng, Liang Zheng
Abstract
This paper focuses on aligning flow matching models with human preferences. A promising approach is to fine-tune by directly backpropagating reward gradients through the differentiable generation process of flow matching. However, backpropagating through long trajectories incurs prohibitive memory costs and gradient explosion. As a result, direct-gradient methods struggle to update early generation steps, which are crucial for determining the global structure of the final image. To address this issue, we introduce LeapAlign, a fine-tuning method that reduces computational cost and enables direct gradient propagation from the reward to early generation steps. Specifically, we shorten the long trajectory into only two steps by designing two consecutive leaps, each skipping multiple ODE sampling steps and predicting future latents in a single step. By randomizing the start and end timesteps of the leaps, LeapAlign enables efficient and stable model updates at any generation step. To make better use of such shortened trajectories, we assign higher training weights to those that are more consistent with the long generation path. To further enhance gradient stability, we down-weight large-magnitude gradient terms instead of removing them entirely, as done in previous works. When fine-tuning the Flux model, LeapAlign consistently outperforms state-of-the-art GRPO-based and direct-gradient methods across various metrics, achieving superior image quality and image-text alignment.
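The two-leap idea described in the abstract can be sketched on a toy one-dimensional flow. Everything below is a hypothetical illustration under stated assumptions, not the paper's implementation: the `velocity` field is a simple linear-decay stand-in for the learned flow network, and the function names, the exponential consistency weight, and the damping constant `c` are made up for this sketch.

```python
import numpy as np

def velocity(x, t):
    """Toy velocity field (assumption: simple linear decay, standing in
    for the trained flow matching model)."""
    return -x

def leap(x, t_start, t_end):
    """One Euler step spanning [t_start, t_end]: skips the intermediate
    ODE sampling steps and predicts the future latent in a single jump."""
    return x + (t_end - t_start) * velocity(x, t_start)

def two_leap_latent(x0, rng):
    """Shorten the long trajectory into two consecutive leaps with a
    randomized intermediate timestep, so reward gradients would only
    need to flow back through two steps."""
    t_mid = rng.uniform(0.1, 0.9)    # randomized leap boundary
    x_mid = leap(x0, 0.0, t_mid)     # first leap: covers early steps
    return leap(x_mid, t_mid, 1.0)   # second leap: covers late steps

def fine_latent(x0, n=100):
    """Reference long generation path: n small Euler steps."""
    ts = np.linspace(0.0, 1.0, n + 1)
    x = x0
    for a, b in zip(ts[:-1], ts[1:]):
        x = leap(x, a, b)
    return x

def consistency_weight(x_leap, x_fine):
    """Higher training weight for two-leap results that stay closer to
    the long path (hypothetical exponential form)."""
    return float(np.exp(-np.abs(x_leap - x_fine)))

def damp_large_gradient(g, c=1.0, eps=1e-8):
    """Down-weight large-magnitude gradient terms instead of dropping
    them: rescale any |g| > c to roughly magnitude c, keep small g as-is."""
    return g * min(1.0, c / (abs(g) + eps))
```

Here the damping keeps the sign and direction of a large gradient term while capping its magnitude, in contrast to hard removal, which would zero out its contribution entirely.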