Operationalising the Superficial Alignment Hypothesis via Task Complexity
2026-02-17 • Machine Learning
AI summary
The authors study the superficial alignment hypothesis (SAH), which suggests that language models learn most of their knowledge during pre-training, while later training mainly surfaces that knowledge. They define a new measure of problem difficulty, called task complexity, based on the length of the shortest program needed to reach a target performance on a task. Their experiments show that conditioning on a pre-trained model makes many tasks far less complex, and that adapting the model after pre-training requires surprisingly little additional information. This helps clarify and unify previous arguments about how these models learn and improve.
Superficial Alignment Hypothesis • Pre-training • Post-training • Task Complexity • Program Length • Large Language Models • Mathematical Reasoning • Machine Translation • Instruction Following
Authors
Tomás Vergara-Browne, Darshan Patil, Ivan Titov, Siva Reddy, Tiago Pimentel, Marius Mosbach
Abstract
The superficial alignment hypothesis (SAH) posits that large language models learn most of their knowledge during pre-training, and that post-training merely surfaces this knowledge. The SAH, however, lacks a precise definition, which has led to (i) different and seemingly orthogonal arguments supporting it, and (ii) important critiques of it. We propose a new metric called task complexity: the length of the shortest program that achieves a target performance on a task. In this framework, the SAH simply claims that pre-trained models drastically reduce the complexity of achieving high performance on many tasks. Our definition unifies prior arguments supporting the SAH, interpreting them as different strategies to find such short programs. Experimentally, we estimate the task complexity of mathematical reasoning, machine translation, and instruction following; we then show that these complexities can be remarkably low when conditioned on a pre-trained model. Further, we find that pre-training enables access to strong performance on our tasks, but accessing it can require programs gigabytes in length. Post-training, on the other hand, collapses the complexity of reaching this same performance by several orders of magnitude. Overall, our results highlight that task adaptation requires surprisingly little information -- often just a few kilobytes.
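As a rough formalisation of the metric described in the abstract (the notation below is ours, not taken from the paper), task complexity can be read as a performance-constrained analogue of Kolmogorov complexity. Writing |p| for the length of a program p and perf_T(p) for its performance on task T, a sketch in LaTeX is:

% Sketch of the task-complexity metric; notation is ours, not the authors'.
% |p| is the length of program p (e.g. in bits); \epsilon is the target performance.
C_\epsilon(T) \;=\; \min_{p} \bigl\{\, |p| \;:\; \mathrm{perf}_T(p) \ge \epsilon \,\bigr\}
% Conditioned on a pre-trained model M, p^{M} denotes a program that may call M as a subroutine.
C_\epsilon(T \mid M) \;=\; \min_{p} \bigl\{\, |p| \;:\; \mathrm{perf}_T(p^{M}) \ge \epsilon \,\bigr\}

Under this reading, the SAH amounts to the claim that C_\epsilon(T \mid M) \ll C_\epsilon(T) for many tasks T once M is a capable pre-trained model; the abstract's estimates place the conditional quantity at only a few kilobytes for the tasks studied.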