Estimating the expected output of wide random MLPs more efficiently than sampling

2026-05-06

Machine Learning
AI summary

The authors show a new way to find the average output of a neural network with random inputs without having to test many random examples one by one. Instead, they use mathematical tools to approximate what happens inside each layer of the network. Their method works especially well for wide networks, uses less computing work than the usual sampling approach, and is better at estimating rare but important outcomes. They also show how this approach can help train models to be less prone to rare but serious mistakes.

Keywords
expected loss, sampling, MLP (multilayer perceptron), Gaussian inputs, cumulants, Hermite expansions, mean squared error, Monte Carlo sampling, rare event probabilities, model training
Authors
Wilson Wu, Victor Lecomte, Michael Winer, George Robinson, Jacob Hilton, Paul Christiano
Abstract
By far the most common way to estimate an expected loss in machine learning is to draw samples, compute the loss on each one, and take the empirical average. However, sampling is not necessarily optimal. Given an MLP at initialization, we show how to estimate its expected output over Gaussian inputs without running samples through the network at all. Instead, we produce approximate representations of the distributions of activations at each layer, leveraging tools such as cumulants and Hermite expansions. We show both theoretically and empirically that for sufficiently wide networks, our estimator achieves a target mean squared error using substantially fewer FLOPs than Monte Carlo sampling. We find moreover that our methods perform particularly well at estimating the probabilities of rare events, and additionally demonstrate how they can be used for model training. Together, these findings suggest a path to producing models with a greatly reduced probability of catastrophic tail risks.
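The paper's estimator builds approximate representations of each layer's activation distribution (using cumulants and Hermite expansions) rather than pushing samples through the network. As a rough illustration of that contrast, and not a reproduction of the authors' method, the sketch below compares plain Monte Carlo estimation of a random ReLU MLP's expected output over Gaussian inputs with a much cruder distributional stand-in: propagating only per-unit means and variances through each layer under a Gaussian, diagonal-covariance assumption on the preactivations. The layer widths, initialization scale, and the relu_moments helper are all illustrative choices made here, not taken from the paper.

```python
# Sketch only: Monte Carlo estimation of E[f(x)], x ~ N(0, I), versus a crude
# layerwise mean/variance propagation. The paper's estimator uses richer
# representations (cumulants, Hermite expansions); this is a simplified stand-in.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
widths = [256, 512, 512, 1]                       # illustrative sizes, not from the paper
weights = [rng.normal(0, 1 / np.sqrt(m), (n, m))  # assumed 1/sqrt(fan_in) initialization
           for m, n in zip(widths[:-1], widths[1:])]

def mlp(x):
    """Forward pass of the random ReLU MLP for a batch of inputs (batch, d_in)."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(h @ W.T, 0.0)
    return h @ weights[-1].T

# Monte Carlo baseline: draw Gaussian inputs, run them through the network, average.
n_samples = 10_000
x = rng.normal(size=(n_samples, widths[0]))
mc_estimate = mlp(x).mean()

# Moment propagation: track only the mean and variance of each unit's activation,
# treating preactivations as Gaussian and ignoring cross-unit correlations.
def relu_moments(mu, var):
    """Closed-form mean and variance of relu(z) for z ~ N(mu, var), elementwise."""
    sigma = np.sqrt(var)
    alpha = mu / sigma
    mean = mu * norm.cdf(alpha) + sigma * norm.pdf(alpha)
    second = (mu**2 + var) * norm.cdf(alpha) + mu * sigma * norm.pdf(alpha)
    return mean, np.maximum(second - mean**2, 0.0)

mu, var = np.zeros(widths[0]), np.ones(widths[0])  # input distribution is N(0, I)
for W in weights[:-1]:
    pre_mu = W @ mu
    pre_var = (W**2) @ var                         # diagonal-covariance approximation
    mu, var = relu_moments(pre_mu, pre_var)
analytic_estimate = (weights[-1] @ mu).item()      # final layer is linear, so E passes through

print(f"Monte Carlo:        {mc_estimate:.4f}")
print(f"Moment propagation: {analytic_estimate:.4f}")
```

Even this crude version illustrates the trade-off the abstract describes: the analytic pass costs roughly one forward pass worth of matrix operations regardless of accuracy target, whereas the Monte Carlo error shrinks only as 1/sqrt(n_samples).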