Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction
2026-02-26 • Artificial Intelligence
AI summary
The authors explain that AI models make predictions based on given inputs but can still make mistakes due to different types of uncertainty. They noticed that when the same input is changed in simple ways and run through the model multiple times, the mistakes are somewhat independent. Using this idea, the authors suggest running the model on many such changed inputs and combining the results to get better predictions. This method can improve accuracy without needing to make the AI model itself bigger.
AI model · inference · aleatoric uncertainty · epistemic uncertainty · invariant transformation · resampling · aggregation · model accuracy · high-dimensional space
Authors
Sha Hu
Abstract
An artificial intelligence (AI) model can be viewed as a function that maps inputs to outputs in high-dimensional spaces. Once designed and well trained, the model is applied for inference. However, even an optimized AI model can produce inference errors due to aleatoric and epistemic uncertainties. Interestingly, we observe that when inferring multiple samples based on invariant transformations of an input, the inference errors caused by epistemic uncertainty can be partially independent. Leveraging this insight, we propose a "resampling"-based inference scheme that applies a trained AI model to multiple transformed versions of an input and aggregates the inference outputs into a more accurate result. This approach has the potential to improve inference accuracy and offers a strategy for balancing model size and performance.
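The resampling idea in the abstract can be illustrated with a toy sketch. The following is not the paper's method, only a minimal hypothetical example: a stand-in "model" predicts a permutation-invariant target (the sum of a vector) with input-dependent noise playing the role of epistemic error, so running it on several permuted copies of the input and averaging the outputs reduces the error.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Hypothetical stand-in for a trained model predicting a
    # permutation-invariant target (the sum of x). The input-dependent
    # noise term mimics epistemic error that is partially independent
    # across transformed versions of the same input.
    seed = zlib.crc32(x.tobytes())  # deterministic per-input noise
    noise = np.random.default_rng(seed).normal(0.0, 1.0)
    return float(x.sum()) + noise

def resampled_inference(x, n_resamples=32):
    # Apply an invariant transformation (a random permutation) several
    # times, infer on each transformed input, and aggregate by averaging.
    preds = [toy_model(rng.permutation(x)) for _ in range(n_resamples)]
    return float(np.mean(preds))

# Compare single-pass vs resampled inference error over a few inputs.
single_errs, resampled_errs = [], []
for _ in range(20):
    x = rng.normal(size=16)
    truth = float(x.sum())
    single_errs.append(abs(toy_model(x) - truth))
    resampled_errs.append(abs(resampled_inference(x) - truth))

print(float(np.mean(single_errs)), float(np.mean(resampled_errs)))
```

Because the per-transformation errors are (approximately) independent, averaging 32 resampled outputs shrinks the error's standard deviation by roughly a factor of sqrt(32), without enlarging the model itself.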