SpecKV: Adaptive Speculative Decoding with Compression-Aware Gamma Selection

2026-05-04

Machine Learning · Artificial Intelligence · Computation and Language · Distributed, Parallel, and Cluster Computing
AI summary

The authors study a way to speed up large language models by using a smaller model to guess several next words at once, which the bigger model then checks in a single pass. They focus on how many words the small model should guess per step: a setting that is usually left fixed but, as they show, should change with the task and with how heavily the bigger model is compressed. Their method, SpecKV, picks this number at each step by looking at the small model's confidence and uncertainty. This yields over 50% more accepted tokens per step than the standard fixed setting, at negligible extra cost, and they release all their data and code for others to use.
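The tradeoff behind that choice is easy to state. Under the usual simplifying assumption in the speculative decoding literature that each drafted token is accepted independently with probability $\alpha$, the expected number of tokens produced per verification step is

$$\mathbb{E}[\text{tokens per step}] = \frac{1 - \alpha^{\gamma + 1}}{1 - \alpha},$$

so a larger $\gamma$ pays off only when $\alpha$ is high: a confident draft model justifies longer speculation, while low acceptance turns extra drafted tokens into wasted verification work. This expectation is the quantity the adaptive controller targets.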

speculative decoding, large language models, speculation length, model compression, draft model, acceptance rate, entropy, confidence, adaptive controller, MLP
Authors
Shikhar Shukla
Abstract
Speculative decoding accelerates large language model (LLM) inference by using a small draft model to propose candidate tokens that a larger target model verifies. A critical hyperparameter in this process is the speculation length γ, which determines how many tokens the draft model proposes per step. Nearly all existing systems use a fixed γ (typically 4), yet empirical evidence suggests that the optimal value varies across task types and, crucially, depends on the compression level applied to the target model. In this paper, we present SpecKV, a lightweight adaptive controller that selects γ per speculation step using signals extracted from the draft model itself. We profile speculative decoding across 4 task categories, 4 speculation lengths, and 3 compression levels (FP16, INT8, NF4), collecting 5,112 step-level records with per-step acceptance rates, draft entropy, and draft confidence. We demonstrate that the optimal γ shifts across compression regimes and that draft-model confidence and entropy are strong predictors of acceptance rate (correlation ≈ 0.56). SpecKV uses a small MLP trained on these signals to maximize the expected number of tokens per speculation step, achieving a 56.0% improvement over the fixed γ = 4 baseline with only 0.34 ms of overhead per decision (<0.5% of step time). The improvement is statistically significant (p < 0.001, paired bootstrap test). We release all profiling data, trained models, and notebooks as open-source artifacts.
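To make the mechanism concrete, here is a minimal PyTorch sketch of the kind of controller the abstract describes. The feature set (draft confidence and entropy), the layer sizes, the candidate γ grid, and all function names are illustrative assumptions, not the paper's exact configuration; one plausible design is to predict the per-token acceptance rate from the draft signals, then pick the γ that maximizes the expected-tokens expression above.

```python
import torch
import torch.nn as nn


class GammaController(nn.Module):
    """Tiny MLP mapping draft-model signals to a predicted acceptance rate.

    Illustrative sketch: feature set, layer sizes, and the candidate gamma
    grid below are assumptions, not the paper's exact configuration.
    """

    def __init__(self, n_features: int = 2, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # predicted acceptance rate alpha in (0, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)


def draft_signals(logits: torch.Tensor) -> torch.Tensor:
    """Confidence (max probability) and entropy of the draft distribution."""
    probs = torch.softmax(logits, dim=-1)
    confidence = probs.max()
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    return torch.stack([confidence, entropy])


def pick_gamma(controller: GammaController, logits: torch.Tensor,
               candidates=(2, 4, 6, 8)) -> int:
    """Pick the gamma maximizing expected tokens per step,
    (1 - alpha**(gamma + 1)) / (1 - alpha), under the predicted alpha."""
    with torch.no_grad():
        alpha = controller(draft_signals(logits)).item()
    return max(candidates,
               key=lambda g: (1 - alpha ** (g + 1)) / (1 - alpha + 1e-9))


# Example: decide gamma for one speculation step from dummy draft logits.
controller = GammaController()
logits = torch.randn(32_000)  # vocab-sized logits from the draft model
gamma = pick_gamma(controller, logits)
```

Because the decision reduces to one tiny forward pass plus a closed-form comparison over a handful of candidates, a controller of this shape is consistent with the sub-millisecond per-decision overhead the abstract reports.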