Evolving Knowledge Distillation for Lightweight Neural Machine Translation
2026-05-11 • Computation and Language
AI summary
The authors focus on shrinking large, high-quality translation models so they can run on devices with limited computing power, such as phones. They found that directly copying knowledge from a large model to a small one works poorly when the size gap between them is big. To fix this, they propose a method called Evolving Knowledge Distillation (EKD), in which the small model learns step by step from a series of teachers, each slightly larger than the last. Their experiments show that this lets the small model translate almost as well as the largest teacher, so compact models can reach near-teacher quality without the computational cost of large models.
Neural Machine Translation • Knowledge Distillation • Model Compression • Teacher-Student Models • BLEU Score • Model Capacity • Progressive Training • IWSLT-14 • WMT-17 • WMT-23
Authors
Xuewen Zhang, Haixiao Zhang, Xinlong Huang
Abstract
Recent advances in Neural Machine Translation (NMT) have significantly improved translation quality. However, the growing size and complexity of state-of-the-art models pose serious challenges for deployment on resource-limited devices. Knowledge distillation (KD) is a promising approach to model compression, but its effectiveness diminishes when there is a large capacity gap between the teacher and student models. To address this issue, we propose Evolving Knowledge Distillation (EKD), a progressive training framework in which the student model learns from a sequence of teachers with gradually increasing capacity. Experiments on the IWSLT-14, WMT-17, and WMT-23 benchmarks show that EKD yields consistent improvements at each stage. On IWSLT-14, the final student achieves a BLEU score of 34.24, narrowing the gap to the strongest teacher (34.32 BLEU) to just 0.08 BLEU. Similar trends hold on the other datasets. These results demonstrate that EKD effectively bridges the capacity gap, enabling compact models to approach the performance of much larger teacher models. Code and models are available at https://github.com/agi-content-generation/EKD.
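The abstract leaves the per-stage objective and training schedule unspecified, so the sketch below is one plausible reading rather than the authors' released implementation (see the linked repository for that). It assumes a standard Hinton-style KD loss combining cross-entropy on the reference translation with a temperature-scaled KL term against the teacher; the seq2seq forward signature `model(src, tgt_in)`, the `alpha` and `temperature` values, and the stage schedule are all illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=0.5, temperature=2.0, pad_id=0):
    """Standard KD loss: cross-entropy on the reference plus a
    temperature-scaled KL term toward the teacher's distribution.
    (Padding positions are not masked in the KL term, for brevity.)"""
    vocab = student_logits.size(-1)
    # Hard-label cross-entropy against the reference translation.
    ce = F.cross_entropy(student_logits.view(-1, vocab),
                         labels.view(-1), ignore_index=pad_id)
    # Soft-label KL divergence to the teacher, with the usual T^2 scaling.
    kl = F.kl_div(
        F.log_softmax(student_logits.view(-1, vocab) / temperature, dim=-1),
        F.softmax(teacher_logits.view(-1, vocab) / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2
    return (1.0 - alpha) * ce + alpha * kl

def evolving_kd(student, teachers, loader, epochs_per_stage=1, lr=1e-4):
    """Train the student against each teacher in turn, with the
    `teachers` list ordered from smallest to largest capacity."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for teacher in teachers:  # gradually increasing capacity
        teacher.eval()
        for _ in range(epochs_per_stage):
            for src, tgt_in, tgt_out in loader:
                with torch.no_grad():
                    t_logits = teacher(src, tgt_in)  # frozen teacher
                s_logits = student(src, tgt_in)
                loss = distillation_loss(s_logits, t_logits, tgt_out)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return student
```

The key design point this sketch captures is that the student's weights carry over from one stage to the next, so each new teacher only has to close a small capacity gap rather than the full distance from the smallest student to the largest teacher.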