From Where Words Come: Efficient Regularization of Code Tokenizers Through Source Attribution

2026-04-15

Computation and Language
AI summary

The authors studied how tokenizing code for Large Language Models can be inefficient when training data is uneven, causing some tokens to be rarely used and not well learned. They found that common, repetitive tokens from dominant sources can crowd out more useful tokens. To fix this, they changed the tokenization process with a method called Source-Attributed BPE (SA-BPE), which reduces overfitting and unused tokens without slowing down the model. This method helps make tokenization more efficient and safer for real-world use.

Keywords
Large Language Models, tokenization, byte pair encoding, BPE, code tokenization, overfitting, inference, jailbreak attacks, data diversity, merge skipping
Authors
Pavel Chizhov, Egor Bogomolov, Ivan P. Yamshchikov
Abstract
Efficiency and safety of Large Language Models (LLMs), among other factors, rely on the quality of tokenization. A good tokenizer not only improves inference speed and language understanding but also provides extra defense against jailbreak attacks and lowers the risk of hallucinations. In this work, we investigate the efficiency of code tokenization, in particular from the perspective of data source diversity. We demonstrate that code tokenizers are prone to producing unused, and thus under-trained, tokens due to the imbalance in repository and language diversity in the training data, as well as the dominance of source-specific, repetitive tokens that are often unusable in future inference. By modifying the BPE objective and introducing merge skipping, we implement several techniques, collectively named Source-Attributed BPE (SA-BPE), that regularize BPE training and minimize overfitting, thereby substantially reducing the number of under-trained tokens while maintaining the same inference procedure as with regular BPE. This provides an effective tool suitable for production use.
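To make the idea of merge skipping concrete, here is a minimal sketch of a source-aware BPE trainer. This is not the authors' exact SA-BPE objective; it illustrates one plausible reading of the abstract: track pair frequencies per source, and skip any candidate merge whose counts concentrate too heavily in a single source (the `max_source_share` threshold and the corpus layout are illustrative assumptions).

```python
from collections import Counter

def pair_counts(corpus):
    """Count adjacent-symbol pairs separately for each source."""
    per_source = {}
    for source, words in corpus.items():
        counts = Counter()
        for word in words:
            for a, b in zip(word, word[1:]):
                counts[(a, b)] += 1
        per_source[source] = counts
    return per_source

def merge_pair(words, pair):
    """Apply one BPE merge to every tokenized word."""
    a, b = pair
    out = []
    for word in words:
        new, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
                new.append(a + b)
                i += 2
            else:
                new.append(word[i])
                i += 1
        out.append(new)
    return out

def sa_bpe(corpus, num_merges, max_source_share=0.9):
    """Greedy BPE that skips merges dominated by a single source.

    corpus: {source_name: [word, ...]}; each word starts as characters.
    A candidate merge is accepted only if no single source contributes
    more than max_source_share of its total count -- the regularization
    step that suppresses source-specific, repetitive tokens.
    """
    corpus = {s: [list(w) for w in words] for s, words in corpus.items()}
    merges = []
    for _ in range(num_merges):
        per_source = pair_counts(corpus)
        totals = Counter()
        for counts in per_source.values():
            totals.update(counts)
        if not totals:
            break
        for pair, total in totals.most_common():
            share = max(c[pair] for c in per_source.values()) / total
            if share <= max_source_share:  # skip source-dominated merges
                merges.append(pair)
                corpus = {s: merge_pair(w, pair) for s, w in corpus.items()}
                break
        else:
            break  # every remaining candidate was source-dominated
    return merges
```

Here, a pair like `("a", "b")` that appears only in one repository is never merged, so no vocabulary slot is spent on a token that would stay under-trained, while pairs spread across sources are merged as in regular BPE. Inference is unaffected: the learned merge list is applied exactly as with a standard BPE tokenizer.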