Statistical Query Lower Bounds for Smoothed Agnostic Learning
2026-02-24 • Machine Learning
Machine Learning · Data Structures and Algorithms
AI summary
The authors study how hard it is to learn the best halfspace (a simple linear classification rule) for noisy data when the inputs are slightly blurred by Gaussian noise. They focus on this setting, called smoothed agnostic learning, and show that the complexity of the best known algorithm is close to the theoretical lower limit for a broad class of algorithms (Statistical Query algorithms). They prove this by relating the problem, via linear programming duality, to how well low-degree polynomials can approximate the blurred target function. This is the first non-trivial lower bound for the task, giving formal evidence that substantially improving on the known algorithm is unlikely.
Keywords
Smoothed agnostic learning · Halfspaces · Gaussian perturbations · Subgaussian distributions · Statistical Query model · L1-polynomial regression · Approximation degree · Linear programming duality · Moment-matching distributions
Authors
Ilias Diakonikolas, Daniel M. Kane
Abstract
We study the complexity of smoothed agnostic learning, recently introduced by \cite{CKKMS24}, in which the learner competes with the best classifier in a target class under slight Gaussian perturbations of the inputs. Specifically, we focus on the prototypical task of agnostically learning halfspaces under subgaussian distributions in the smoothed model. The best known upper bound for this problem relies on $L_1$-polynomial regression and has complexity $d^{\tilde{O}(1/\sigma^2) \log(1/\epsilon)}$, where $\sigma$ is the smoothing parameter and $\epsilon$ is the excess error. Our main result is a Statistical Query (SQ) lower bound providing formal evidence that this upper bound is close to best possible. In more detail, we show that (even for Gaussian marginals) any SQ algorithm for smoothed agnostic learning of halfspaces requires complexity $d^{\Omega(1/\sigma^{2}+\log(1/\epsilon))}$. This is the first non-trivial lower bound on the complexity of this task and nearly matches the known upper bound. Roughly speaking, we show that applying $L_1$-polynomial regression to a smoothed version of the function is essentially best possible. Our techniques involve finding a moment-matching hard distribution by way of linear programming duality. This dual program corresponds exactly to finding a low-degree approximating polynomial to the smoothed version of the target function (which turns out to be the same condition required for the $L_1$-polynomial regression to work). Our explicit SQ lower bound then comes from proving lower bounds on this approximation degree for the class of halfspaces.
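To make the algorithmic side of the abstract concrete, here is a minimal one-dimensional sketch of $L_1$-polynomial regression: fit a low-degree polynomial minimizing the average absolute error against $\pm 1$ labels by solving a linear program, then classify by the polynomial's sign. This is an illustrative toy (synthetic data, hypothetical function names, `scipy.optimize.linprog` as the LP solver), not the paper's $d$-dimensional construction.

```python
import numpy as np
from scipy.optimize import linprog

def l1_poly_regression(x, y, degree):
    """Fit coefficients c of a degree-`degree` polynomial p minimizing
    (1/n) * sum_i |p(x_i) - y_i|, cast as a linear program."""
    n = len(x)
    k = degree + 1
    # Vandermonde feature matrix: column j holds x**j.
    V = np.vander(x, k, increasing=True)
    # Variables: [c (k coefficients), e (n slack residuals)]; minimize mean(e).
    cost = np.concatenate([np.zeros(k), np.ones(n) / n])
    # Encode |V c - y| <= e as two one-sided inequalities.
    A_ub = np.block([[V, -np.eye(n)], [-V, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * k + [(0, None)] * n)
    return res.x[:k]

# Toy agnostic setup: labels come from sign(x) with 10% adversarial flips.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.sign(x)
y[rng.random(200) < 0.1] *= -1
c = l1_poly_regression(x, y, degree=5)
preds = np.sign(np.vander(x, 6, increasing=True) @ c)
err = np.mean(preds != y)
```

The degree of the polynomial is what drives the $d^{O(\mathrm{degree})}$ complexity in the multivariate setting: the paper's lower bound says that, for the smoothed target, no SQ algorithm can beat the degree that this regression approach needs.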