GlotOCR Bench: OCR Models Still Struggle Beyond a Handful of Unicode Scripts
2026-04-14 • Computation and Language • Computer Vision and Pattern Recognition
AI summary
The authors created GlotOCR Bench, a new test to see how well optical character recognition (OCR) systems read text from over 100 different writing scripts, including less common ones. They made images of real text in many languages using various fonts and checked these images carefully for accuracy. Testing several OCR models showed most do well on only a few scripts, and even the best models struggle with many scripts outside their training. This suggests OCR systems rely heavily on knowing the script beforehand and can fail or guess wrongly on unfamiliar scripts. The authors shared their benchmark and tools openly for others to use.
Optical Character Recognition • Vision-Language Models • Unicode Scripts • Benchmarking • Multilingual Text • Font Rendering • Script Generalization • HarfBuzz • FreeType • Pretraining
Authors
Amir Hossein Kargaran, Nafiseh Nikeghbal, Jana Diesner, François Yvon, Hinrich Schütze
Abstract
Optical character recognition (OCR) has advanced rapidly with the rise of vision-language models, yet evaluation has remained concentrated on a small cluster of high- and mid-resource scripts. We introduce GlotOCR Bench, a comprehensive benchmark evaluating OCR generalization across 100+ Unicode scripts. Our benchmark comprises clean and degraded image variants rendered from real multilingual texts. Images are rendered using fonts from the Google Fonts repository, shaped with HarfBuzz and rasterized with FreeType, supporting both LTR and RTL scripts. Samples of rendered images were manually reviewed to verify correct rendering across all scripts. We evaluate a broad suite of open-weight and proprietary vision-language models and find that most perform well on fewer than ten scripts, and even the strongest frontier models fail to generalize beyond thirty scripts. Performance broadly tracks script-level pretraining coverage, suggesting that current OCR systems rely on language model pretraining as much as on visual recognition. Models confronted with unfamiliar scripts either produce random noise or hallucinate characters from similar scripts they already know. We release the benchmark and pipeline for reproducibility. Pipeline Code: https://github.com/cisnlp/glotocr-bench, Benchmark: https://hf.co/datasets/cis-lmu/glotocr-bench.