GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering
2026-03-16 • Computer Vision and Pattern Recognition
AI summary
The authors address the challenge of creating clear and accurate letter shapes (glyphs) for text rendering, especially for complicated or unusual characters. They propose GlyphPrinter, a new method that uses human preferences rather than traditional reward models to improve glyph accuracy. To better focus on small errors in specific parts of letters, they created a dataset called GlyphCorrector and developed a region-based learning technique called Region-Grouped DPO. Their approach leads to more precise glyphs while still allowing for stylish text appearance. Experiments show GlyphPrinter works better than previous methods at balancing accuracy and style.
glyph • text rendering • Direct Preference Optimization (DPO) • reinforcement learning • region-level preference • GlyphCorrector dataset • Region-Grouped DPO (R-GDPO) • reward model • stylization • visual text synthesis
Authors
Xincheng Shuai, Ziye Li, Henghui Ding, Dacheng Tao
Abstract
Generating accurate glyphs for visual text rendering is essential yet challenging. Existing methods typically enhance text rendering by training on large collections of high-quality scene-text images, but the limited coverage of glyph variations and excessive stylization often compromise glyph accuracy, especially for complex or out-of-domain characters. Some methods leverage reinforcement learning to alleviate this issue, yet their reward models usually depend on text recognition systems that are insensitive to fine-grained glyph errors, so images with incorrect glyphs may still receive high rewards. Inspired by Direct Preference Optimization (DPO), we propose GlyphPrinter, a preference-based text rendering method that eliminates reliance on explicit reward models. However, the standard DPO objective only models overall preference between two samples, which is insufficient for visual text rendering, where glyph errors typically occur in localized regions. To address this issue, we construct the GlyphCorrector dataset with region-level glyph preference annotations and propose Region-Grouped DPO (R-GDPO), a region-based objective that optimizes inter- and intra-sample preferences over annotated regions, substantially enhancing glyph accuracy. Furthermore, we introduce Regional Reward Guidance, an inference strategy that samples from an optimal distribution with controllable glyph accuracy. Extensive experiments demonstrate that the proposed GlyphPrinter outperforms existing methods in glyph accuracy while maintaining a favorable balance between stylization and precision.
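To make the contrast between sample-level DPO and a region-grouped objective concrete, the sketch below implements the standard DPO loss and a hypothetical region-grouped variant in plain Python. The exact R-GDPO formulation is not given in this abstract, so the function names (`dpo_loss`, `region_grouped_dpo_loss`), the region dictionary layout, and the way inter- and intra-sample terms are combined are all illustrative assumptions, not the authors' actual objective.

```python
import math

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def dpo_loss(lp_w, lp_l, ref_w, ref_l, beta=0.1):
    """Standard sample-level DPO: -log sigmoid of the scaled log-ratio margin
    between the preferred (w) and dispreferred (l) sample."""
    margin = (lp_w - ref_w) - (lp_l - ref_l)
    return -log_sigmoid(beta * margin)

def region_grouped_dpo_loss(regions, beta=0.1, lam=1.0):
    """Hypothetical region-grouped objective (illustrative, not the paper's).

    Each region dict holds policy/reference log-probs of that region under the
    preferred ("lp_w"/"ref_w") and dispreferred ("lp_l"/"ref_l") sample, plus an
    "error" flag marking regions where the dispreferred sample has a glyph error.
    Inter-sample term: prefer the winner over the loser per region.
    Intra-sample term: within the loser, prefer correct regions over wrong ones.
    """
    inter = sum(dpo_loss(r["lp_w"], r["lp_l"], r["ref_w"], r["ref_l"], beta)
                for r in regions) / len(regions)
    good = [r for r in regions if not r["error"]]
    bad = [r for r in regions if r["error"]]
    intra = 0.0
    if good and bad:
        pairs = [(g, b) for g in good for b in bad]
        intra = sum(dpo_loss(g["lp_l"], b["lp_l"], g["ref_l"], b["ref_l"], beta)
                    for g, b in pairs) / len(pairs)
    return inter + lam * intra
```

The key property this sketch captures is that a localized glyph error contributes its own loss term instead of being averaged away in a single sample-level margin, which is the failure mode the abstract attributes to vanilla DPO.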