CODA: Difficulty-Aware Compute Allocation for Adaptive Reasoning
2026-03-09 • Computation and Language
AI summary
The authors look at how big reasoning models sometimes spend too much time on easy problems without much benefit. They present CODA, a method that adjusts how much effort the model puts in based on the problem's difficulty. CODA measures difficulty internally and uses that to decide whether to keep thinking more or stop early. This approach saves computing resources on simple tasks while still taking time on harder ones to maintain accuracy.
adaptive reasoning, inference-time compute, utility maximization, token allocation, difficulty estimation, rollouts, model scaling, efficiency, accuracy-cost tradeoff, reasoning depth
Authors
Siye Wu, Jian Xie, Yikai Zhang, Yanghua Xiao
Abstract
The emergence of large reasoning models demonstrates that scaling inference-time compute significantly enhances performance on complex tasks. However, this scaling often falls into another trap: overthinking simple problems, where repetitive rationales yield minimal accuracy gains at a disproportionately high cost. This motivates adaptive reasoning: dynamically aligning reasoning depth with instance difficulty. In this paper, we study adaptive reasoning from an optimality perspective, formalizing it as a utility maximization problem in which tokens are allocated until the marginal accuracy gain falls below the incremental cost. Based on this, we propose CODA (Compute Allocation by Difficulty Awareness), a method that operationalizes this principle by allocating tokens via a policy-internal difficulty signal. Specifically, CODA estimates difficulty via group-based rollouts and maps it to two non-negative gates that modulate a length-dependent shaping term on top of the binary base reward. The easy-side gate penalizes verbosity on simple instances, whereas the hard-side gate encourages more deliberative rollouts on challenging ones. Across model scales and benchmarks, CODA achieves adaptive reasoning without external annotations or user-provided budgets: on easy tasks, CODA reduces token costs by over 60% while maintaining strong accuracy, whereas on hard tasks it incentivizes more deliberative rollouts to maximize performance.
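To make the mechanism in the abstract concrete, here is a minimal sketch of difficulty-gated reward shaping in the spirit of CODA. Everything below is an illustrative assumption, not the paper's formulation: the function names, the failure-rate difficulty estimate, the threshold of 0.5 splitting the two gates, the reference length, and the linear shaping term are all hypothetical choices made for the example.

```python
# Hypothetical sketch of CODA-style difficulty-gated reward shaping.
# The gate functions, shaping term, and constants are illustrative
# assumptions, not the paper's exact formulation.

def estimate_difficulty(group_correct: list[bool]) -> float:
    """Policy-internal difficulty signal: failure rate over a group of rollouts."""
    return 1.0 - sum(group_correct) / len(group_correct)

def shaped_reward(correct: bool, length: int, difficulty: float,
                  ref_length: int = 512) -> float:
    """Binary base reward plus a length-dependent shaping term modulated
    by two non-negative, difficulty-driven gates."""
    base = 1.0 if correct else 0.0
    # Two non-negative gates: only one is active for a given instance
    # (assumed split at difficulty 0.5).
    g_easy = max(0.0, 0.5 - difficulty)   # active on easy instances
    g_hard = max(0.0, difficulty - 0.5)   # active on hard instances
    # Length-dependent shaping: excess tokens relative to a reference budget.
    excess = max(0.0, (length - ref_length) / ref_length)
    # Easy-side gate penalizes verbosity; hard-side gate rewards longer,
    # more deliberative rollouts (only when they succeed).
    return base - g_easy * excess + g_hard * excess * base
```

Under this sketch, a correct but verbose answer on an easy instance (low failure rate in the group) scores lower than an equally correct short one, while on a hard instance the same extra length is rewarded, which is the qualitative behavior the abstract describes.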