Ensemble-size-dependence of deep-learning post-processing methods that minimize an (un)fair score: motivating examples and a proof-of-concept solution

2026-02-17

Machine Learning
AI summary

The authors study how to fairly score and improve weather forecasts made by groups of predictions called ensembles. They show that some methods for adjusting these ensembles create unwanted dependencies between members, making fair scores such as the adjusted continuous ranked probability score (aCRPS) unreliable. To fix this, they propose a new approach using trajectory transformers, which apply self-attention over lead time while keeping members conditionally independent, improving forecast accuracy and reliability. Their method works well even when the number of ensemble members differs between training and real-time use.

ensemble forecasting, adjusted continuous ranked probability score (aCRPS), forecast calibration, conditional independence, transformers, self-attention, post-processing, weather prediction, ensemble size, forecast reliability
Authors
Christopher David Roberts
Abstract
Fair scores reward ensemble forecast members that behave like samples from the same distribution as the verifying observations. They are therefore an attractive choice as loss functions for training data-driven ensemble forecasts or post-processing methods when large training ensembles are either unavailable or computationally prohibitive. The adjusted continuous ranked probability score (aCRPS) is fair and unbiased with respect to ensemble size, provided forecast members are exchangeable and interpretable as conditionally independent draws from an underlying predictive distribution. However, distribution-aware post-processing methods that introduce structural dependencies between members can violate this assumption, rendering aCRPS unfair. We demonstrate this effect using two approaches designed to minimize the expected aCRPS of a finite ensemble: (1) a linear member-by-member calibration, which couples members through a common dependency on the sample ensemble mean, and (2) a deep-learning method, which couples members via transformer self-attention across the ensemble dimension. In both cases, the results are sensitive to ensemble size, and apparent gains in aCRPS can correspond to systematic unreliability characterized by over-dispersion. We introduce trajectory transformers as a proof of concept that ensemble-size independence can be achieved. This approach is an adaptation of the Post-processing Ensembles with Transformers (PoET) framework and applies self-attention over lead time while preserving the conditional independence required by aCRPS. When applied to weekly mean $T_{2m}$ forecasts from the ECMWF subseasonal forecasting system, the method successfully reduces systematic model biases whilst also improving or maintaining forecast reliability, regardless of the ensemble size used in training (3 vs 9 members) or in real-time forecasts (9 vs 100 members).
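
The ensemble-size adjustment at the heart of the abstract has a standard closed form: the fair (adjusted) CRPS replaces the $1/(2m^2)$ weighting of the usual ensemble CRPS spread term with $1/(2m(m-1))$. A minimal NumPy sketch of this well-known estimator, assuming scalar forecasts (the paper's exact aCRPS definition may differ in detail):

```python
import numpy as np

def fair_crps(ensemble, obs):
    """Fair/adjusted CRPS estimator for a finite ensemble (after Ferro).

    ensemble : 1-D array of m exchangeable member forecasts
    obs      : scalar verifying observation

    Unbiased with respect to ensemble size m, provided the members are
    conditionally independent draws from the predictive distribution.
    """
    x = np.asarray(ensemble, dtype=float)
    m = x.size
    # Accuracy term: mean absolute error of the members.
    term1 = np.mean(np.abs(x - obs))
    # Spread term: the 1/(2*m*(m-1)) weight (instead of the usual
    # 1/(2*m**2)) is what removes the finite-ensemble-size bias.
    term2 = np.sum(np.abs(x[:, None] - x[None, :])) / (2 * m * (m - 1))
    return term1 - term2
```

The unbiasedness of this estimator is exactly what breaks down once post-processing introduces dependence between the members.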
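
The first mechanism the abstract describes, a linear member-by-member calibration coupled through the sample ensemble mean, can be illustrated with a hypothetical parameterization of the kind used in member-by-member post-processing; the coefficients and exact form below are assumptions for illustration, not the paper's:

```python
import numpy as np

def mbm_calibrate(ensemble, alpha, beta, gamma):
    """Illustrative linear member-by-member (MBM) calibration.

    Every corrected member depends on the *sample* ensemble mean xbar,
    so the corrected members are no longer conditionally independent:
    exactly the structural dependency that renders aCRPS unfair.
    The (alpha, beta, gamma) parameterization is an assumption for
    illustration; the paper's form may differ.
    """
    x = np.asarray(ensemble, dtype=float)
    xbar = x.mean()
    return alpha + beta * xbar + gamma * (x - xbar)
```

Because xbar is estimated from the same finite sample, the corrected members co-vary through it, and the aCRPS spread adjustment no longer cancels the finite-ensemble bias.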
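
The proof-of-concept remedy applies self-attention over lead time rather than across the ensemble dimension. The abstract gives no architectural detail beyond this axis choice, so the PyTorch sketch below only illustrates that idea; TrajectoryAttention, d_model, and n_heads are illustrative names rather than the PoET implementation:

```python
import torch
import torch.nn as nn

class TrajectoryAttention(nn.Module):
    """Sketch: self-attention along the lead-time axis only.

    Members are folded into the batch dimension, so no information
    flows between them and conditional independence is preserved.
    """
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, members, lead_times, d_model)
        b, m, t, d = x.shape
        x = x.reshape(b * m, t, d)     # each member becomes its own sequence
        out, _ = self.attn(x, x, x)    # attention mixes lead times, not members
        return out.reshape(b, m, t, d)
```

Folding members into the batch is what keeps each post-processed member an independent function of its own trajectory, so the same trained network can be applied to 3, 9, or 100 members without changing the statistics of any individual member.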