Aligning Validation with Deployment: Target-Weighted Cross-Validation for Spatial Prediction

2026-03-31

Machine Learning
AI summary

The authors explain that cross-validation, a method for estimating how well a predictive model will perform, typically assumes that the validation data resemble the deployment data. In spatial prediction, this assumption is often violated, yielding biased performance estimates. They introduce Target-Weighted CV (TWCV), which corrects for differences between validation and deployment data by weighting validation tasks so that their distribution better matches the deployment scenario. They also combine TWCV with a resampling scheme that generates more varied validation tasks, showing in simulations and a pollution-mapping case study that this combination reduces bias and better estimates deployment performance. Overall, the authors identify mismatched task distributions as a major source of cross-validation error for spatial data and propose a remedy based on calibration weighting and validation task design.

Cross-validation · Spatial prediction · Dataset shift · Covariate shift · Task difficulty shift · Calibration weighting · Importance weighting · Validation task generation · Predictive risk estimation · Spatially buffered resampling
Authors
Alexander Brenning, Thomas Suesse
Abstract
Cross-validation (CV) is commonly used to estimate predictive risk when independent test data are unavailable. Its validity depends on the assumption that validation tasks are sampled from the same distribution as prediction tasks encountered during deployment. In spatial prediction and other settings with structured data, this assumption is frequently violated, leading to biased estimates of deployment risk. We propose Target-Weighted CV (TWCV), an estimator of deployment risk that accounts for discrepancies between validation and deployment task distributions, thereby addressing (1) covariate shift and (2) task-difficulty shift. We characterize prediction tasks by descriptors such as covariates and spatial configuration. TWCV assigns weights to validation losses such that the weighted empirical distribution of validation tasks matches the corresponding distribution over a target domain. The weights are obtained via calibration weighting, yielding an importance-weighted estimator that targets deployment risk. Since TWCV requires adequate coverage of the deployment distribution's support, we combine it with spatially buffered resampling that diversifies the task-difficulty distribution. In a simulation study, both conventional and spatial estimators exhibit substantial bias depending on the sampling design, whereas buffered TWCV remains approximately unbiased across scenarios. A case study in environmental pollution mapping further confirms that discrepancies between validation and deployment task distributions can affect performance assessment, and that buffered TWCV better reflects the prediction task over the target domain. These results establish task distribution mismatch as a primary source of CV bias in spatial prediction and show that calibration weighting combined with a suitable validation task generator provides a viable approach to estimating predictive risk under dataset shift.
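The core idea of calibration weighting can be sketched in a minimal one-dimensional example: reweight validation losses so that the weighted distribution of a task descriptor matches its mean over the target domain, then report the weighted average loss instead of the plain CV average. The sketch below is an illustration under simplifying assumptions, not the authors' implementation; the single scalar descriptor `d_val`, the synthetic losses, and the exponential-tilting weight form (a 1-D entropy-balancing variant solved by bisection) are all assumptions for the example.

```python
import numpy as np

def calibrate_weights(d_val, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Exponential-tilting weights w_i proportional to exp(theta * d_i),
    with theta chosen so the weighted mean of the descriptor d equals
    target_mean. The weighted mean is increasing in theta, so bisection
    on theta suffices in this 1-D sketch."""
    def weights_and_mean(theta):
        w = np.exp(theta * (d_val - d_val.mean()))  # centered for numerical stability
        w /= w.sum()
        return w, float((w * d_val).sum())
    w, _ = weights_and_mean(0.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        w, m = weights_and_mean(mid)
        if abs(m - target_mean) < tol:
            break
        if m < target_mean:
            lo = mid
        else:
            hi = mid
    return w

rng = np.random.default_rng(0)
# Hypothetical validation tasks: descriptor d measures task difficulty,
# and the per-task loss increases with difficulty.
d_val = rng.normal(0.0, 1.0, size=500)
losses = 1.0 + 0.5 * d_val + rng.normal(0.0, 0.1, size=500)

# Deployment tasks are harder on average than the validation tasks.
target_mean = 0.8

w = calibrate_weights(d_val, target_mean)
naive_risk = losses.mean()        # conventional CV estimate
weighted_risk = (w * losses).sum()  # target-weighted estimate
```

With the loss increasing in the difficulty descriptor and a harder target domain, the weighted risk estimate exceeds the naive CV average, reflecting the harder deployment tasks. Real applications would use multivariate descriptors (covariates plus spatial configuration) and a proper calibration-weighting solver, and the abstract's caveat applies: the validation tasks must cover the support of the deployment distribution for the weights to be well behaved.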