Provable Quantization with Randomized Hadamard Transform

2026-05-13

Machine Learning · Data Structures and Algorithms
AI summary

The authors study a method for compressing vectors by first applying the randomized Hadamard transform, a fast structured rotation, and then quantizing each coordinate after subtracting a small random offset (a dither). This transform is much faster than a fully random rotation, but its discrete structure makes it harder to analyze. They prove that the random offset makes the quantizer unbiased and yields mean squared error bounds that match those of fully random rotations as the number of quantization levels grows, uniformly over all data directions and all dimensions.

vector quantization, random projection, randomized Hadamard transform, scalar quantization, mean squared error, dithered quantization, TurboQuant, compression guarantees, random rotations, unit vectors
Authors
Ying Feng, Piotr Indyk, Michael Kapralov, Dmitry Krachun, Boris Prokhorov
Abstract
Vector quantization via random projection followed by scalar quantization is a fundamental primitive in machine learning, with applications ranging from similarity search to federated learning and KV cache compression. While dense random rotations yield clean theoretical guarantees, they require $\Theta(d^2)$ time. The randomized Hadamard transform $HD$ reduces this cost to $O(d \log d)$, but its discrete structure complicates analysis and leads to weaker or purely empirical compression guarantees. In this work, we study a variant of this approach: dithered quantization with a single randomized Hadamard transform. Specifically, the quantizer applies $HD$ to the input vector and subtracts a random scalar offset before quantizing, injecting additional randomness at negligible cost. We prove that this approach is unbiased and provides mean squared error bounds that asymptotically match those achievable with truly random rotation matrices. In particular, we prove that a dithered version of TurboQuant achieves mean squared error $\bigl(\pi\sqrt{3}/2 + o(1)\bigr) \cdot 4^{-b}$ at $b$ bits per coordinate, where the $o(1)$ term vanishes uniformly over all unit vectors and all dimensions as the number of quantization levels grows.
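To make the pipeline concrete, here is a minimal NumPy sketch of the HD-plus-dither quantizer described above. It assumes $d$ is a power of two and uses a generic uniform grid clipped at roughly four standard deviations; the function names (`fwht`, `dithered_hadamard_quantize`, `dequantize`) and the grid are illustrative stand-ins, not the paper's exact TurboQuant construction.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform in O(d log d); len(x) must be a power of two."""
    y = x.copy()
    d = len(y)
    h = 1
    while h < d:
        y = y.reshape(-1, 2 * h)
        a, c = y[:, :h].copy(), y[:, h:].copy()
        y[:, :h], y[:, h:] = a + c, a - c
        y = y.reshape(-1)
        h *= 2
    return y / np.sqrt(d)  # scaling makes H orthonormal (and, since H is symmetric, self-inverse)

def dithered_hadamard_quantize(v, b, rng):
    """Rotate v with HD, subtract a shared random dither, round to a 2^b-level uniform grid."""
    d = len(v)
    signs = rng.choice([-1.0, 1.0], size=d)  # random diagonal matrix D
    y = fwht(signs * v)                      # y = H D v; for unit v, each coordinate has variance 1/d
    lim = 4.0 / np.sqrt(d)                   # assumed clipping range: ~4 standard deviations
    step = 2.0 * lim / 2 ** b
    u = rng.uniform(-step / 2, step / 2)     # scalar dither shared by all coordinates
    q = np.clip(np.round((y - u) / step), -(2 ** (b - 1)), 2 ** (b - 1) - 1)
    return q.astype(np.int32), signs, u, step

def dequantize(q, signs, u, step):
    """Add the dither back, then invert H (self-inverse) and D (the signs)."""
    y_hat = q * step + u
    return signs * fwht(y_hat)

# Usage: quantize a random unit vector at b = 4 bits per coordinate.
rng = np.random.default_rng(0)
v = rng.standard_normal(1024)
v /= np.linalg.norm(v)
q, signs, u, step = dithered_hadamard_quantize(v, b=4, rng=rng)
mse = np.mean((dequantize(q, signs, u, step) - v) ** 2)
```

Unbiasedness in this sketch is the standard subtractive-dither argument (the clipping step breaks it in the extreme tails); the specific grid and the $\bigl(\pi\sqrt{3}/2 + o(1)\bigr) \cdot 4^{-b}$ error constant in the abstract come from the paper's TurboQuant construction, not from this generic stand-in.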