Divide and Predict: An Architecture for Input Space Partitioning and Enhanced Accuracy
2026-03-09 • Machine Learning
AI summary
The authors create a way to measure how mixed or different the training data is by looking at the variance that comes from pairs of data points. They show that this variance helps identify if the data comes from several different groups. Their tests with image and synthetic data found that the variance is highest when the data is an even mix of groups. They also show that cleaning the data based on this variance can improve the results when training models.
heterogeneity · supervised learning · variance · data distribution · EMNIST · random variable · data partitioning · data purification · test accuracy
Authors
Fenix W. Huang, Henning S. Mortveit, Christian M. Reidys
Abstract
In this article, the authors develop an intrinsic measure for quantifying heterogeneity in training data for supervised learning. The measure is the variance of a random variable that factors through the influences of pairs of training points. This variance is shown to capture data heterogeneity and can thus be used to assess whether a sample is a mixture of distributions. The authors prove that the data itself contains the key information needed to partition it into blocks. Several proof-of-concept studies quantify the connection between variance and heterogeneity for EMNIST image data and for synthetic data. The authors establish that the variance is maximal for equal mixes of distributions, and detail how variance-based data purification, followed by conventional training over the resulting blocks, can lead to significant increases in test accuracy.
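The abstract's claim that a pair-based variance peaks for equal mixes can be illustrated with a toy statistic. The sketch below is not the authors' measure; it uses a hypothetical proxy (the variance of pairwise distances in a two-component 1D Gaussian mixture) purely to show the qualitative effect: balanced mixing fractions maximize the fraction of cross-component pairs, and hence the spread of pairwise values.

```python
import numpy as np

def pairwise_distance_variance(x: np.ndarray) -> float:
    """Variance of |x_i - x_j| over all unordered pairs of sample points.

    This is an illustrative proxy for a pair-based heterogeneity measure,
    not the measure defined in the article.
    """
    diffs = np.abs(x[:, None] - x[None, :])
    iu = np.triu_indices(len(x), k=1)  # unordered pairs, i < j
    return float(diffs[iu].var())

def mixture_sample(p: float, n: int, rng: np.random.Generator) -> np.ndarray:
    """Draw n points: N(0,1) with probability p, N(10,1) otherwise."""
    from_first = rng.random(n) < p
    return np.where(from_first,
                    rng.normal(0.0, 1.0, n),
                    rng.normal(10.0, 1.0, n))

rng = np.random.default_rng(0)
variances = {p: pairwise_distance_variance(mixture_sample(p, 500, rng))
             for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
# The pairwise-distance variance should peak near the balanced mix p = 0.5:
# the fraction of cross-component pairs is 2p(1-p), which is maximal at p = 0.5.
```

With well-separated components, a highly unbalanced sample (p near 0 or 1) contains few cross-component pairs, so the pairwise statistic concentrates and its variance drops; this mirrors the article's qualitative finding for their intrinsic measure.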