Dissecting Quantization Error: A Concentration-Alignment Perspective
2026-03-04 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors study how to make large language and vision models smaller and faster through quantization without losing too much accuracy. They explain that the quality of quantization depends on how concentrated the model's weights and activations are, and on how well their main directions of variation line up. Based on this, they create a new method called Concentration-Alignment Transforms (CAT) that improves both concentration and alignment using a small sample of calibration data. Their tests show that CAT works as well as or better than previous methods at 4-bit quantization.
quantization, large language models, post-training quantization, signal-to-quantization-noise ratio (SQNR), weight concentration, activation concentration, alignment, linear transformations, Hadamard transform, bit precision
Authors
Marco Federici, Boris van Breugel, Paul Whatmough, Markus Nagel
Abstract
Quantization can drastically increase the efficiency of large language and vision models, but typically incurs an accuracy drop. Recently, function-preserving transforms (e.g. rotations, the Hadamard transform, channel-wise scaling) have been successfully applied to reduce post-training quantization error, yet a principled explanation remains elusive. We analyze linear-layer quantization via the signal-to-quantization-noise ratio (SQNR), showing that for uniform integer quantization at a fixed bit width, SQNR decomposes into (i) the concentration of weights and activations (capturing spread and outliers), and (ii) the alignment of their dominant directions of variation. This reveals an actionable insight: beyond concentration, the focus of most prior transforms (e.g. rotations or the Hadamard transform), improving the alignment between weights and activations can further reduce quantization error. Motivated by this, we introduce block Concentration-Alignment Transforms (CAT), lightweight linear transformations that use a covariance estimate from a small calibration set to jointly improve concentration and alignment, approximately maximizing SQNR. Experiments across several LLMs show that CAT consistently matches or outperforms prior transform-based quantization methods at 4-bit precision, confirming the insights from our framework.
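As a rough illustration of the SQNR framing described in the abstract (and not a reproduction of the paper's CAT construction), the sketch below quantizes a toy linear layer with symmetric 4-bit uniform quantization and compares SQNR with and without a function-preserving transform. It uses a normalized Hadamard matrix as the transform, which improves concentration by spreading activation outliers across channels; the quantizer details, the toy data, and all variable names are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import hadamard

def quantize_uniform(x, bits=4):
    """Symmetric per-tensor uniform integer quantization (illustrative)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def sqnr_db(y, y_hat):
    """Signal-to-quantization-noise ratio in dB."""
    return 10 * np.log10(np.sum(y ** 2) / np.sum((y - y_hat) ** 2))

rng = np.random.default_rng(0)
d = 64
# Heavy-tailed activations with a few outlier channels, a common pattern in LLMs.
X = rng.standard_t(df=3, size=(512, d))
X[:, :4] *= 10.0
W = rng.standard_normal((d, d)) / np.sqrt(d)

Y = X @ W.T  # full-precision reference output

# Baseline: quantize weights and activations directly.
Y_base = quantize_uniform(X) @ quantize_uniform(W).T

# Function-preserving transform: for orthogonal T, (X T)(W T)^T = X W^T,
# so the layer's output is unchanged in exact arithmetic. A normalized
# Hadamard matrix spreads outliers across channels before quantization.
T = hadamard(d) / np.sqrt(d)  # orthogonal, so T^{-1} = T.T
Y_had = quantize_uniform(X @ T) @ quantize_uniform(W @ T).T

print(f"SQNR, direct 4-bit:   {sqnr_db(Y, Y_base):.2f} dB")
print(f"SQNR, Hadamard 4-bit: {sqnr_db(Y, Y_had):.2f} dB")
```

This toy only targets concentration; per the abstract, CAT additionally uses a covariance estimate from a small calibration set to align the dominant directions of variation of weights and activations, which this sketch does not attempt.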