Brainstacks: Cross-Domain Cognitive Capabilities via Frozen MoE-LoRA Stacks for Continual LLM Learning

2026-04-01 · Computation and Language · Artificial Intelligence
AI summary

The authors introduce Brainstacks, a system that helps large language models learn multiple domains continuously by stacking small task-specific modules without changing the main model. They use clever techniques like freezing old knowledge to avoid forgetting and selectively combining modules for new tasks. Their tests show faster learning and better quality by mixing these modules smartly. Interestingly, the system often uses general thinking skills across domains instead of just domain-specific facts. This suggests the modules capture useful problem-solving abilities, not just specialized knowledge.

large language models · continual learning · adapter stacks · MoE-LoRA · QLoRA quantization · null-space projection · residual boosting · meta-router · domain adaptation · instruction tuning
Authors
Mohammad R. Abu Ayyash
Abstract
We present Brainstacks, a modular architecture for continual multi-domain fine-tuning of large language models that packages domain expertise as frozen adapter stacks which compose additively on a shared frozen base at inference. The system comprises five interlocking components: (1) MoE-LoRA with Shazeer-style noisy top-2 routing across all seven transformer projections, under QLoRA 4-bit quantization with rsLoRA scaling; (2) an inner loop that performs residual boosting by freezing trained stacks and adding new ones; (3) an outer loop that trains sequential domain-specific stacks with curriculum-ordered dependencies; (4) null-space projection via randomized SVD, constraining new stacks to subspaces orthogonal to prior directions and achieving zero forgetting in isolation; (5) an outcome-based sigmoid meta-router, trained on empirically discovered domain-combination targets, that selectively weights stacks to enable cross-domain composition. Two boundary experiments probe the limits of the approach: (6) PSN pretraining on a randomly initialized model, and (7) per-domain RL (DPO/GRPO) validating compatibility with post-SFT alignment. Validated on TinyLlama-1.1B (4 domains, 9 stacks) and Gemma 3 12B IT (5 domains, 10 stacks), MoE-LoRA achieves 2.5x faster convergence than a parameter-matched single LoRA, residual boosting breaks through the single-stack ceiling, and the routed system recovers the generation quality destroyed by ungated stack accumulation. The central finding: the outcome-based router discovers that domain stacks encode transferable cognitive primitives (instruction-following clarity, numerical reasoning, procedural logic, chain-of-thought structure) rather than domain-specific knowledge, with medical prompts routing to chat+math stacks in 97% of cases despite zero medical data in those stacks.
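The Shazeer-style noisy top-2 routing named in component (1) adds input-dependent Gaussian noise to the gating logits before selecting and renormalizing the two highest-scoring experts. A minimal NumPy sketch of that gating step, not the paper's implementation (function name, weight names, and shapes are illustrative assumptions):

```python
import numpy as np

def noisy_top2_gate(x, W_g, W_noise, rng):
    """Shazeer-style noisy top-k gating with k=2: clean logits plus
    input-dependent Gaussian noise, then softmax over only the two
    selected experts so their gate weights sum to one."""
    clean = x @ W_g
    noise_std = np.log1p(np.exp(x @ W_noise))   # softplus keeps the noise scale positive
    logits = clean + rng.standard_normal(clean.shape) * noise_std
    top2 = np.argsort(logits)[-2:]              # indices of the two largest noisy logits
    gates = np.zeros_like(logits)
    z = logits[top2] - logits[top2].max()       # shift for numerical stability
    gates[top2] = np.exp(z) / np.exp(z).sum()   # softmax over the selected pair only
    return gates, top2
```

At train time the noise encourages load balancing across experts; at inference the noise term is typically dropped.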
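The null-space projection of component (4) can be sketched with a randomized range finder (the core of randomized SVD, in the style of Halko et al.) standing in for the full decomposition: it builds an orthonormal basis for the span of previously learned update directions and subtracts that span from a new adapter update. All names, the oversampling amount, and the exact interface are assumptions, not the paper's code:

```python
import numpy as np

def nullspace_project(delta_W, prior_dirs, rank, rng):
    """Constrain a new adapter update to the subspace orthogonal to the
    top-`rank` directions spanned by prior updates, using a randomized
    range finder to approximate the dominant left-singular subspace."""
    # Randomized range finder: random probes sample the column space of prior_dirs.
    omega = rng.standard_normal((prior_dirs.shape[1], rank + 4))  # small oversampling
    Q, _ = np.linalg.qr(prior_dirs @ omega)
    Q = Q[:, :rank]                              # orthonormal basis for the prior span
    return delta_W - Q @ (Q.T @ delta_W)         # remove every component in that span
```

Because the projected update has no component along the prior directions, applying it cannot perturb what those directions encode, which is the mechanism behind the zero-forgetting-in-isolation claim.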
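Component (5)'s sigmoid meta-router differs from softmax routing in that each frozen stack gets an independent gate in (0, 1), so any subset of stacks can activate at once and their outputs compose additively, which is what permits cross-domain combinations such as chat+math. A hedged sketch under assumed names; the threshold and parameterization are illustrative:

```python
import numpy as np

def meta_route(h, W_r, b_r, stack_outputs, threshold=0.0):
    """Outcome-based sigmoid meta-router: one independent sigmoid gate per
    frozen stack (multi-label, not a softmax), so several stacks can fire
    together; active stack outputs are summed with their gate weights."""
    gates = 1.0 / (1.0 + np.exp(-(h @ W_r + b_r)))   # per-stack activation in (0, 1)
    active = gates >= threshold                       # stacks allowed to contribute
    # Sum of weighted outputs from active stacks (0 if nothing activates).
    combined = sum(g * out for g, out, a in zip(gates, stack_outputs, active) if a)
    return combined, gates
```

Training the gates against empirically discovered domain-combination targets, rather than domain labels, is what lets the router converge on transferable combinations like routing medical prompts to chat+math stacks.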