Thinking Without Words: Efficient Latent Reasoning with Abstract Chain-of-Thought
2026-04-24 • Computation and Language
AI summary
The authors created a new way for AI language models to think using a short set of special made-up tokens, instead of long written-out explanations. They trained the model in steps, first teaching it to hide parts of normal explanations and then to generate these short tokens on its own. This method lets the model solve problems using far fewer tokens while still performing well on math and reasoning tasks. The authors also observed that their special tokens follow statistical patterns similar to those of normal language words. Overall, their approach makes AI reasoning faster and more efficient without losing accuracy.
Chain-of-Thought, latent reasoning, reinforcement learning, self-distillation, constrained decoding, language models, tokenization, post-training optimization, multi-hop reasoning, power law distribution
Authors
Keshav Ramji, Tahira Naseem, Ramón Fernandez Astudillo
Abstract
While long, explicit chains-of-thought (CoT) have proven effective on complex reasoning tasks, they are costly to generate during inference. Non-verbal reasoning methods that leverage continuous representations achieve shorter generation lengths, yet their performance lags behind verbalized CoT. We propose $\textbf{Abstract Chain-of-Thought}$, a discrete latent reasoning post-training mechanism in which the language model produces a short sequence of tokens from a reserved vocabulary in lieu of a natural language CoT, before generating a response. To make previously unseen "abstract" tokens useful, we introduce a policy iteration-style warm-up loop that alternates between (i) bottlenecking a verbal CoT via masking and performing supervised fine-tuning, and (ii) self-distillation, training the model to generate abstract tokens from the prompt alone via constrained decoding over the codebook. After warm-up, we optimize the generation of abstract sequences with warm-started reinforcement learning under constrained decoding. Abstract-CoT uses up to $11.6\times$ fewer reasoning tokens while achieving comparable performance across mathematical reasoning, instruction-following, and multi-hop reasoning, and it generalizes across language model families. We also find an emergent power-law distribution over the abstract vocabulary, akin to those seen in natural language, that evolves across the training phases. Our findings highlight the potential of post-training latent reasoning mechanisms that enable efficient inference through a learned abstract reasoning language.
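As a rough illustration of the constrained-decoding step described in the abstract, the sketch below masks a model's per-step output logits so that only a reserved codebook of abstract-token IDs can be sampled during the latent-reasoning phase. The names (`constrained_sample`, `ABSTRACT_TOKEN_IDS`) and the specific token IDs are hypothetical and not taken from the paper; this is a minimal sketch assuming a standard autoregressive decoder that exposes next-token logits.

```python
import torch

# Hypothetical reserved codebook: token IDs set aside as "abstract" reasoning
# tokens. The actual reserved vocabulary and its size depend on the tokenizer.
ABSTRACT_TOKEN_IDS = torch.tensor([50257, 50258, 50259, 50260])  # example IDs


def constrained_sample(logits: torch.Tensor,
                       allowed_ids: torch.Tensor,
                       temperature: float = 1.0) -> torch.Tensor:
    """Sample the next token from `logits`, restricted to `allowed_ids`.

    `logits` has shape (vocab_size,). Tokens outside the allowed set are
    masked to -inf so they receive zero probability after the softmax.
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_ids] = 0.0
    probs = torch.softmax((logits + mask) / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)


# Usage sketch: during the latent-reasoning phase, decode only abstract tokens;
# once that phase ends, switch back to the full vocabulary for the response.
vocab_size = 50304
logits = torch.randn(vocab_size)  # stand-in for one decoding step's logits
next_token = constrained_sample(logits, ABSTRACT_TOKEN_IDS)
```

The same masking idea can be applied at every step of the abstract-token phase; how the phase boundary is signaled (e.g., a special end-of-reasoning token) is not specified here and would follow the paper's actual setup.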