Speculative Speculative Decoding

2026-03-03

Machine Learning
AI summary

The authors explain that generating text one token at a time (autoregressive decoding) is slow because each step must wait for the one before it. Speculative decoding speeds this up: a fast draft model guesses upcoming tokens, and a slower, more accurate target model checks all of the guesses at once. Even so, drafting and checking still alternate in sequence. The authors' new method, speculative speculative decoding (SSD), overlaps the two: while the target model is still checking one batch of guesses, the draft model predicts the likely outcomes of that check and prepares the next guesses ahead of time. Their optimized SSD algorithm, Saguaro, runs up to 2x faster than optimized speculative decoding and up to 5x faster than standard token-by-token decoding.
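The draft-then-verify loop the summary describes can be sketched in a few lines of Python. This is a minimal illustration with greedy acceptance, not code from the paper; `draft_next` and `target_argmax` are hypothetical callables standing in for the draft and target models.

```python
# Minimal sketch of the baseline the summary describes: speculative
# decoding with greedy acceptance. `draft_next` and `target_argmax`
# are hypothetical callables for the draft and target models.

def speculative_decode_step(tokens, draft_next, target_argmax, k=4):
    """Draft k tokens, verify them with one target pass, and return the
    extended sequence; at least one token is always accepted."""
    # 1. Draft: the fast model proposes k tokens autoregressively.
    context = list(tokens)
    proposal = []
    for _ in range(k):
        token = draft_next(context)
        proposal.append(token)
        context.append(token)

    # 2. Verify: a single target forward pass scores every drafted
    # position at once, returning the target's own pick at each of the
    # k positions plus one bonus position (k + 1 picks in total).
    target_picks = target_argmax(tokens, proposal)

    # 3. Accept the longest prefix where draft and target agree, then
    # take the target's token at the first disagreement (or the bonus
    # token if the whole draft was accepted).
    accepted = []
    for drafted, picked in zip(proposal, target_picks):
        if drafted == picked:
            accepted.append(drafted)
        else:
            accepted.append(picked)
            break
    else:
        accepted.append(target_picks[k])

    return list(tokens) + accepted
```

The key point for what follows is step 1: drafting is itself sequential, and in the standard loop it can only begin after the previous verification has finished.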

autoregressive decoding, speculative decoding, parallel processing, inference acceleration, draft model, target model, verification, Saguaro, text generation
Authors
Tanishq Kumar, Tri Dao, Avner May
Abstract
Autoregressive decoding is bottlenecked by its sequential nature. Speculative decoding has become a standard way to accelerate inference by using a fast draft model to predict upcoming tokens from a slower target model, and then verifying them in parallel with a single target model forward pass. However, speculative decoding itself relies on a sequential dependence between speculation and verification. We introduce speculative speculative decoding (SSD) to parallelize these operations. While a verification is ongoing, the draft model predicts likely verification outcomes and prepares speculations pre-emptively for them. If the actual verification outcome is then in the predicted set, a speculation can be returned immediately, eliminating drafting overhead entirely. We identify three key challenges presented by speculative speculative decoding, and suggest principled methods to solve each. The result is Saguaro, an optimized SSD algorithm. Our implementation is up to 2x faster than optimized speculative decoding baselines and up to 5x faster than autoregressive decoding with open source inference engines.
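To make the overlap concrete, here is a hedged sketch of the control flow the abstract describes: verification of the current draft runs in the background while the draft model pre-emptively prepares speculations for the most likely verification outcomes. All names (`draft`, `verify`, `predict_outcomes`) are hypothetical stand-ins, and the threading is purely illustrative; this is not the Saguaro implementation.

```python
# Hedged sketch of the SSD control flow from the abstract. All names
# (draft, verify, predict_outcomes) are hypothetical stand-ins; this
# illustrates the idea, it is not the Saguaro implementation.

from concurrent.futures import ThreadPoolExecutor

def ssd_step(tokens, draft, verify, predict_outcomes, k=4, m=3):
    """One SSD step: verify the current proposal while pre-emptively
    drafting continuations for the m most likely verification outcomes."""
    proposal = draft(tokens, k)  # k draft tokens for the target to check

    with ThreadPoolExecutor(max_workers=1) as pool:
        # Target verification of `proposal` runs in the background.
        pending = pool.submit(verify, tokens, proposal)

        # Meanwhile, predict the m most likely accepted prefixes
        # (tuples of tokens) and prepare a next draft for each one.
        prepared = {
            outcome: draft(tokens + list(outcome), k)
            for outcome in predict_outcomes(tokens, proposal, m)
        }

        accepted = tuple(pending.result())  # actual verification outcome

    if accepted in prepared:
        # Hit: the next speculation is already prepared, so drafting
        # adds no latency on top of verification.
        next_proposal = prepared[accepted]
    else:
        # Miss: draft after verification, as in plain speculative decoding.
        next_proposal = draft(tokens + list(accepted), k)

    return tokens + list(accepted), next_proposal
```

On a hit, the prepared speculation is returned the moment verification completes, so drafting cost is hidden behind verification latency; on a miss, the step degrades to the ordinary sequential draft-then-verify path.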