Keywords: Transformer, Optical Character Recognition (OCR), TrOCR, Ge'ez script, Tigrinya language, Byte Pair Encoding (BPE), Word-Aware Loss Weighting, Character Error Rate (CER), Synthetic data, Model adaptation
Abstract
Transformer-based OCR models have shown strong performance on Latin and CJK scripts, but their application to African syllabic writing systems remains limited. We present the first adaptation of TrOCR for printed Tigrinya using the Ge'ez script. Starting from a pre-trained model, we extend the byte-level BPE tokenizer to cover 230 Ge'ez characters and introduce Word-Aware Loss Weighting to resolve systematic word-boundary failures that arise when applying Latin-centric BPE conventions to a new script. The unmodified model produces no usable output on Ge'ez text. After adaptation, the TrOCR-Printed variant achieves 0.22% Character Error Rate and 97.20% exact match accuracy on a held-out test set of 5,000 synthetic images from the GLOCR dataset. An ablation study confirms that Word-Aware Loss Weighting is the critical component, reducing CER by two orders of magnitude compared to vocabulary extension alone. The full pipeline trains in under three hours on a single 8 GB consumer GPU. All code, model weights, and evaluation scripts are publicly released.
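To make the Word-Aware Loss Weighting idea concrete, here is a minimal, hypothetical sketch in pure Python. The function names, the boundary weight of 2.0, and the toy losses are all assumptions for illustration, not the authors' implementation; the only detail taken from the abstract is the overall idea of reweighting the training loss around word boundaries. Byte-level BPE tokenizers of the kind TrOCR uses conventionally mark word-initial subwords with a leading "Ġ" (an encoded space), so the sketch upweights exactly those tokens:

```python
def word_aware_weights(tokens, boundary_weight=2.0, base_weight=1.0):
    """Return one loss weight per token; word-initial tokens are upweighted.

    A token opens a new word if it is the first token of the sequence or
    carries the byte-level BPE space marker "G-dot" ("Ġ").
    """
    weights = []
    for i, tok in enumerate(tokens):
        is_word_start = (i == 0) or tok.startswith("Ġ")
        weights.append(boundary_weight if is_word_start else base_weight)
    return weights


def weighted_token_loss(per_token_losses, weights):
    """Weighted mean of per-token cross-entropy losses."""
    assert len(per_token_losses) == len(weights)
    total = sum(l * w for l, w in zip(per_token_losses, weights))
    return total / sum(weights)


# Toy usage: three illustrative Ge'ez subwords, two of which start words.
tokens = ["ሰላም", "Ġዓለም", "ና"]
losses = [0.5, 1.0, 0.2]              # made-up per-token CE losses
w = word_aware_weights(tokens)        # -> [2.0, 2.0, 1.0]
print(weighted_token_loss(losses, w)) # (0.5*2 + 1.0*2 + 0.2*1) / 5 = 0.64
```

In a real training loop the weights would multiply the per-token cross-entropy before reduction; the key design choice is that boundary-placing tokens contribute more gradient, which is plausibly why the ablation shows this component dominating the CER improvement.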