EMO: Pretraining Mixture of Experts for Emergent Modularity

2026-05-07

Computation and Language
AI summary

The authors study a type of language model called a Mixture-of-Experts (MoE). Although an MoE activates only a few of its experts per token, which experts are needed changes from token to token, so the whole model must stay loaded in memory. They created EMO, an MoE trained so that tokens from the same document pick experts from a shared pool, meaning only a relevant subset of experts is needed for each document. This lets the model run with far fewer experts loaded while losing little accuracy, saving memory. They also found that EMO naturally organizes its experts by meaning, like math or code, unlike standard MoEs, whose experts focus on low-level language features.

Large Language Models · Mixture-of-Experts (MoE) · Modularity · Expert Specialization · Sparse Models · Memory Efficiency · Pretraining · Domain Adaptation · Token Selection
Authors
Ryan Wang, Akshita Bhagia, Sewon Min
Abstract
Large language models are typically deployed as monolithic systems, requiring the full model even when applications need only a narrow subset of capabilities, e.g., code, math, or domain-specific knowledge. Mixture-of-Experts (MoEs) seem to offer an alternative by activating only a subset of experts per input, but in practice, restricting inference to a subset of experts for a given domain leads to severe performance degradation. This limits their practicality in memory-constrained settings, especially as models grow larger and sparser. We introduce EMO, an MoE designed for modularity (the independent use and composition of expert subsets) without requiring human-defined priors. Our key idea is to encourage tokens from similar domains to rely on similar experts. Since tokens within a document often share a domain, EMO restricts them to select experts from a shared pool, while allowing different documents to use different pools. This simple constraint enables coherent expert groupings to emerge during pretraining using document boundaries alone. We pretrain a 1B-active, 14B-total EMO on 1T tokens. As a full model, it matches standard MoE performance. Crucially, it enables selective expert use: retaining only 25% (12.5%) of experts incurs just a 1% (3%) absolute drop, whereas standard MoEs break under the same setting. We further find that expert subsets in EMO specialize at semantic levels (e.g., domains such as math or code), in contrast to the low-level syntactic specialization observed in standard MoEs. Altogether, our results demonstrate a path toward modular, memory-efficient deployment of large, sparse models and open new opportunities for composable architectures.
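
To make the document-level gating constraint concrete, here is a minimal PyTorch sketch of top-k routing restricted to a per-document expert pool. The function name, tensor shapes, and the way the pool is chosen are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of document-constrained MoE routing.
# The pool-selection mechanism shown here (a random subset) is a
# placeholder; EMO's actual gating and pool assignment may differ.
import torch
import torch.nn.functional as F


def route_with_document_pool(
    router_logits: torch.Tensor,  # [num_tokens, num_experts] gating scores
    pool_mask: torch.Tensor,      # [num_experts] bool: this document's expert pool
    top_k: int = 2,
):
    """Select top-k experts per token, restricted to the document's pool."""
    # Disallow experts outside the shared pool by masking their logits.
    masked = router_logits.masked_fill(~pool_mask, float("-inf"))
    # Standard top-k gating over the remaining (in-pool) experts.
    weights, expert_ids = masked.topk(top_k, dim=-1)
    weights = F.softmax(weights, dim=-1)
    return weights, expert_ids


# Example: 6 tokens of one document route within a pool of 8 of 64 experts.
logits = torch.randn(6, 64)
pool = torch.zeros(64, dtype=torch.bool)
pool[torch.randperm(64)[:8]] = True  # stand-in for however a document's pool is chosen
w, ids = route_with_document_pool(logits, pool)
assert pool[ids].all()  # every selected expert lies in the document's pool
```

Under this constraint, tokens in the same document can only route within one pool, so at deployment time a domain-specific application would only need that pool's experts in memory rather than all 64.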