VOSR: A Vision-Only Generative Model for Image Super-Resolution

2026-04-03 · Computer Vision and Pattern Recognition


Keywords
super-resolution, diffusion models, text-to-image models, vision encoder, classifier-free guidance, image restoration, generative models, low-resolution images, model distillation, perceptual quality
Authors
Rongyuan Wu, Lingchen Sun, Zhengqiang Zhang, Xiangtao Kong, Jixin Zhao, Shihao Wang, Lei Zhang
Abstract
Most recent generative image super-resolution (SR) methods rely on adapting large text-to-image (T2I) diffusion models pretrained on web-scale text-image data. While effective, this paradigm starts from a generic T2I generator, even though SR is fundamentally a low-resolution (LR) input-conditioned image restoration task. In this work, we investigate whether an SR model trained purely on visual data can rival T2I-based ones. To this end, we propose VOSR, a Vision-Only generative framework for SR. We first extract semantically rich and spatially grounded features from the LR input using a pretrained vision encoder as visual semantic guidance. We then revisit classifier-free guidance for training generative models and show that the standard unconditional branch is ill-suited to restoration models trained from scratch. We therefore replace it with a restoration-oriented guidance strategy that preserves weak LR anchors. Built upon these designs, we first train a multi-step VOSR model from scratch and then distill it into a one-step model for efficient inference. VOSR requires less than one-tenth of the training cost of representative T2I-based SR methods, yet in both multi-step and one-step settings it achieves competitive or even better perceptual quality and efficiency, while producing more faithful structures with fewer hallucinations on both synthetic and real-world benchmarks. Our results show, for the first time, that high-quality generative SR can be achieved without multimodal pretraining. The code and models can be found at https://github.com/cswry/VOSR.
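The abstract contrasts standard classifier-free guidance, whose unconditional branch drops the condition entirely, with a restoration-oriented variant that keeps a weak LR anchor in place of the unconditional branch. The paper's exact formulation is not given here, so the following is only a minimal sketch of that contrast; the function names, the `weak`-branch interpretation, and the linear-combination form are assumptions for illustration, not VOSR's actual implementation.

```python
import numpy as np

def cfg_combine(eps_cond, eps_uncond, scale):
    """Standard classifier-free guidance: extrapolate the conditional
    prediction away from a fully unconditional one."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

def restoration_guidance(eps_full, eps_weak_lr, scale):
    """Hypothetical restoration-oriented variant: the unconditional branch
    is replaced by a prediction still weakly anchored on the LR input, so
    guidance sharpens details without drifting from the LR structure."""
    return eps_weak_lr + scale * (eps_full - eps_weak_lr)

# Toy denoiser outputs (stand-ins for network predictions).
eps_full = np.array([1.0, 2.0])     # fully conditioned prediction
eps_uncond = np.array([0.0, 0.0])   # no condition at all
eps_weak_lr = np.array([0.8, 1.6])  # weak LR-anchored prediction

guided_cfg = cfg_combine(eps_full, eps_uncond, scale=2.0)
guided_res = restoration_guidance(eps_full, eps_weak_lr, scale=2.0)
```

With the same guidance scale, the restoration-oriented combination stays much closer to the conditional prediction because its reference branch already encodes the LR input, which is one way to read the abstract's claim of fewer hallucinations and more faithful structures.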