What Language is This? Ask Your Tokenizer

2026-02-19

Computation and Language
AI summary

The authors developed UniLID, a new method for identifying the language of a text, which is especially helpful when little data is available or when languages are closely related. UniLID segments text into subword pieces using per-language probability models, which keeps it lightweight and lets new languages be added without retraining the existing ones. Tests show that UniLID matches popular existing tools on standard benchmarks and is better at detecting subtle differences such as dialects, even with very few labeled examples. The approach also fits smoothly into existing language processing pipelines.
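Read formally, this describes a Bayes decision rule over per-language unigram models. A plausible formalization, assumed here for illustration rather than quoted from the paper, is

$$\hat{\ell}(x) = \operatorname*{argmax}_{\ell}\; p(\ell)\, \max_{\mathbf{s} \in S(x)} \prod_{i=1}^{|\mathbf{s}|} p(s_i \mid \ell),$$

where $S(x)$ is the set of segmentations of the text $x$ into tokens from the shared vocabulary. Both the token probabilities $p(s_i \mid \ell)$ and the maximizing segmentation depend on the language $\ell$, which is what the abstract below means by treating segmentation as a language-specific phenomenon; the inner maximization mirrors UnigramLM's Viterbi-style inference.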

Language Identification, Unigram Language Model, Tokenization, Low-resource languages, Dialect identification, Probabilistic modeling, Incremental learning, Natural Language Processing, Cross-lingual evaluation, Sample efficiency
Authors
Clara Meister, Ahmetcan Yavuz, Pietro Lesci, Tiago Pimentel
Abstract
Language Identification (LID) is an important component of many multilingual natural language processing pipelines, where it facilitates corpus curation, training data analysis, and cross-lingual evaluation of large language models. Despite near-perfect performance on high-resource languages, existing systems remain brittle in low-resource and closely related language settings. We introduce UniLID, a simple and efficient LID method based on the UnigramLM tokenization algorithm, leveraging its probabilistic framing, parameter estimation technique, and inference strategy. In short, we learn language-conditional unigram distributions over a shared tokenizer vocabulary but treat segmentation as a language-specific phenomenon. Our formulation is data- and compute-efficient, supports incremental addition of new languages without retraining existing models, and can naturally be integrated into existing language model tokenization pipelines. Empirical evaluations against widely used baselines, including fastText, GlotLID, and CLD3, show that UniLID achieves competitive performance on standard benchmarks, substantially improves sample efficiency in low-resource settings (surpassing 70% accuracy with as few as five labeled samples per language), and delivers large gains on fine-grained dialect identification.
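To make the recipe concrete, below is a minimal sketch of a language-conditional unigram classifier. This is not the authors' implementation: UniLID scores UnigramLM segmentations over a shared subword vocabulary, whereas this toy version substitutes whitespace tokenization and add-alpha smoothing, and the class name UnigramLID, its methods, and the hyperparameters are all illustrative.

```python
import math
from collections import Counter

class UnigramLID:
    """Toy language-conditional unigram classifier over a shared vocabulary."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha                    # additive-smoothing constant (illustrative)
        self.vocab: set[str] = set()          # vocabulary shared across languages
        self.counts: dict[str, Counter] = {}  # language -> token counts

    def add_language(self, lang: str, samples: list[str]) -> None:
        """Fit one language independently; previously fitted languages are untouched."""
        counts = Counter()
        for text in samples:
            tokens = text.split()             # stand-in for a subword tokenizer
            counts.update(tokens)
            self.vocab.update(tokens)
        self.counts[lang] = counts

    def log_prob(self, lang: str, tokens: list[str]) -> float:
        """Smoothed log-probability of the token sequence under one language's model."""
        counts = self.counts[lang]
        total = sum(counts.values()) + self.alpha * len(self.vocab)
        return sum(math.log((counts[t] + self.alpha) / total) for t in tokens)

    def predict(self, text: str) -> str:
        """Pick the language whose unigram model assigns the highest probability."""
        tokens = text.split()
        return max(self.counts, key=lambda lang: self.log_prob(lang, tokens))


lid = UnigramLID()
lid.add_language("eng", ["the cat sat on the mat", "a quick brown fox"])
lid.add_language("deu", ["die katze sitzt auf der matte", "ein schneller brauner fuchs"])
print(lid.predict("the brown cat"))  # -> "eng"
```

Note how add_language fits each language independently: adding a third language later never touches the models already fitted for eng and deu, which is what makes the incremental addition of new languages claimed in the abstract cheap.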