Understanding In-Context Learning for Nonlinear Regression with Transformers: Attention as Featurizer
2026-05-06 • Machine Learning
AI summary
The authors study how transformer models can perform nonlinear regression purely from examples given in the prompt, without any weight updates, an ability called in-context learning (ICL). Whereas most previous work has focused on simple linear tasks, they show how transformers can build nonlinear features such as polynomials through the attention mechanism. They develop a theoretical framework that bounds the generalization error of these models in terms of the training set size and the prompt length, and they validate the theory with simulations on synthetic regression tasks, finding agreement with its predictions.
transformers · in-context learning · nonlinear regression · attention mechanism · polynomial features · spline bases · generalization error · finite-sample bounds · prompt length
Authors
Alexander Hsu, Zhaiming Shen, Wenjing Liao, Rongjie Lai
Abstract
Pre-trained transformers are able to learn from examples provided as part of the prompt without any weight updates, a remarkable ability known as in-context learning (ICL). Despite its demonstrated efficacy across various domains, the theoretical understanding of ICL is still developing. Whereas most existing theory has focused on linear models, we study ICL in the nonlinear regression setting. Through the interaction mechanism in attention, we explicitly construct transformer networks to realize nonlinear features, such as polynomial or spline bases, which span a wide class of functions. Based on this construction, we establish a framework to analyze end-to-end in-context nonlinear regression with the constructed features. Our theory provides finite-sample generalization error bounds in terms of context length and training set size. We numerically validate the theory on synthetic regression tasks.
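To make the end-to-end pipeline concrete, below is a minimal, illustrative sketch, not the paper's construction: instead of realizing the nonlinear features inside attention, it uses explicit monomial features as a stand-in for the attention-constructed featurizer, then solves ridge-regularized least squares on the prompt (context) examples to predict the query label. All names (`poly_features`, `icl_predict`), the degree-3 setup, and the ridge parameter are assumptions chosen for illustration.

```python
# Illustrative sketch only (assumed setup, not the paper's attention construction):
# a polynomial "featurizer" followed by in-context least squares on prompt examples.
import numpy as np

def poly_features(x, degree=3):
    """Map inputs x (shape [n]) to monomial features [1, x, ..., x^degree]."""
    return np.stack([x**k for k in range(degree + 1)], axis=-1)  # shape [n, degree+1]

def icl_predict(x_context, y_context, x_query, degree=3, ridge=1e-6):
    """Predict the query label from in-context (x, y) pairs.

    High-level analogue of the studied pipeline: a nonlinear featurizer
    (here explicit polynomials standing in for attention-built features)
    followed by regression fit only on the context examples.
    """
    Phi = poly_features(x_context, degree)                  # [n, d]
    phi_q = poly_features(np.atleast_1d(x_query), degree)   # [1, d]
    # Ridge-regularized least squares on the context examples.
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    w = np.linalg.solve(A, Phi.T @ y_context)
    return float(phi_q @ w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy nonlinear regression task: y = 2x^3 - x + noise.
    x_ctx = rng.uniform(-1, 1, size=32)
    y_ctx = 2 * x_ctx**3 - x_ctx + 0.01 * rng.normal(size=32)
    x_q = 0.5
    print("prediction:", icl_predict(x_ctx, y_ctx, x_q))
    print("target:    ", 2 * x_q**3 - x_q)   # -0.25
```

In this toy setting, increasing the number of context examples (the prompt length) shrinks the prediction error, which is the kind of dependence the paper's finite-sample bounds make precise for the actual transformer-based featurizer.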