A Scale-Adaptive Framework for Joint Spatiotemporal Super-Resolution with Diffusion Models
2026-04-23 • Machine Learning
Machine Learning · Artificial Intelligence
AI summary
The authors propose a new method for improving video resolution in both space and time together, rather than separately. Their approach uses one model that can be adapted to different resolution increases by adjusting just a few settings, instead of building a new model for each case. They also include a way to keep important totals, like precipitation amounts, consistent after increasing resolution. Tested on rainfall data over France, their method works well across a wide range of resolution increases, making it flexible and reusable for climate-related video enhancement.
video super-resolution, spatiotemporal modeling, deep learning, diffusion model, conditional mean prediction, attention mechanism, mass conservation, climate data, reanalysis precipitation, hyperparameter tuning
Authors
Max Defez, Filippo Quarenghi, Mathieu Vrac, Stephan Mandt, Tom Beucler
Abstract
Deep-learning video super-resolution has progressed rapidly, but climate applications typically super-resolve (increase resolution in) either space or time, and joint spatiotemporal models are often designed for a single pair of super-resolution (SR) factors (the spatial and temporal upscaling ratios between the low-resolution and high-resolution sequences), limiting transfer across spatial resolutions and temporal cadences (frame rates). We present a scale-adaptive framework that reuses the same architecture across factors by decomposing spatiotemporal SR into a deterministic, attention-based prediction of the conditional mean and a residual conditional diffusion model, with an optional mass-conservation transform (enforcing the same precipitation amount in inputs and outputs) to preserve aggregated totals. Assuming that larger SR factors primarily increase underdetermination (and hence the required context and residual uncertainty) rather than changing the conditional-mean structure, scale adaptivity is achieved by retuning up to three factor-dependent hyperparameters before retraining: the diffusion noise-schedule amplitude beta (larger for larger factors to increase diversity), the temporal context length L (set to maintain comparable attention horizons across cadences), and optionally the mass-conservation function f (tapered to limit the amplification of extremes at large factors). Demonstrated on reanalysis precipitation over France (Comephore), the same architecture spans SR factors from 1 to 25 in space and 1 to 6 in time, yielding a reusable architecture and tuning recipe for joint spatiotemporal super-resolution across scales.
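To make the mass-conservation idea concrete, the sketch below shows one common way to enforce that aggregated totals are preserved after spatial super-resolution: rescaling each coarse cell's high-resolution block so its mean matches the low-resolution value. This is a minimal illustration under assumptions of our own; the function name `conserve_mass` and the simple multiplicative rescaling are hypothetical, and the paper's transform f may instead be tapered to limit the amplification of extremes at large factors.

```python
import numpy as np

def conserve_mass(hr, lr, s, eps=1e-8):
    """Rescale each s-by-s block of the high-res field `hr` so its block
    mean matches the corresponding low-res cell in `lr`.

    hr : (H*s, W*s) array, super-resolved precipitation field
    lr : (H, W) array, low-resolution precipitation field
    s  : int, spatial super-resolution factor
    """
    H, W = lr.shape
    # Reshape so axis 0/2 index coarse cells and axis 1/3 index sub-pixels.
    blocks = hr.reshape(H, s, W, s)
    block_mean = blocks.mean(axis=(1, 3), keepdims=True)
    # Multiplicative correction per coarse cell (eps guards dry blocks).
    scale = lr[:, None, :, None] / (block_mean + eps)
    return (blocks * scale).reshape(H * s, W * s)
```

After this correction, averaging the output over each s-by-s block recovers the low-resolution field, so precipitation totals aggregated back to the coarse grid are unchanged.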