Recovered in Translation: Efficient Pipeline for Automated Translation of Benchmarks and Datasets
2026-02-25 • Computation and Language
Computation and Language · Artificial Intelligence · Machine Learning
AI summary
The authors observed that the translated tests currently used to check how well language models work across many languages often lose meaning or context, making the resulting scores unreliable. They built an automatic system that produces better translations by applying two test-time quality-improvement methods, Universal Self-Improvement and T-RANK. Using this system, they translated popular tests into eight Eastern and Southern European languages, producing more accurate and reliable benchmarks for evaluating models. Their work improves how language models are measured across languages, and they share their tools and translations for others to use.
Large Language Models, multilingual benchmarks, machine translation, semantic drift, benchmark evaluation, Universal Self-Improvement (USI), T-RANK, test-time compute scaling, language localization, LLM evaluation metrics
Authors
Hanna Yukhymenko, Anton Alexandrov, Martin Vechev
Abstract
The reliability of multilingual Large Language Model (LLM) evaluation is currently compromised by the inconsistent quality of translated benchmarks. Existing resources often suffer from semantic drift and context loss, which can lead to misleading performance metrics. In this work, we present a fully automated framework designed to address these challenges by enabling scalable, high-quality translation of datasets and benchmarks. We demonstrate that adapting test-time compute scaling strategies, specifically Universal Self-Improvement (USI) and our proposed multi-round ranking method, T-RANK, allows for significantly higher quality outputs compared to traditional pipelines. Our framework ensures that benchmarks preserve their original task structure and linguistic nuances during localization. We apply this approach to translate popular benchmarks and datasets into eight Eastern and Southern European languages (Ukrainian, Bulgarian, Slovak, Romanian, Lithuanian, Estonian, Turkish, Greek). Evaluations using both reference-based metrics and LLM-as-a-judge show that our translations surpass existing resources, resulting in more accurate downstream model assessment. We release both the framework and the improved benchmarks to facilitate robust and reproducible multilingual AI development.
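The abstract does not spell out how T-RANK's multi-round ranking works, but a tournament-style selection over candidate translations is one plausible reading of "multi-round ranking" with test-time compute scaling. The sketch below is purely illustrative, not the authors' algorithm: `trank_select` and the toy length-based judge are invented names, and a real pipeline would call an LLM judge for each pairwise comparison.

```python
from itertools import combinations

def trank_select(candidates, judge, rounds=2):
    """Illustrative multi-round pairwise ranking (NOT the paper's T-RANK).

    Each round, every pair of surviving candidates is compared by `judge`,
    which returns the preferred translation of the pair; the lower-scoring
    half of the pool is dropped. The last survivor is the selected output.
    """
    pool = list(candidates)
    for _ in range(rounds):
        if len(pool) <= 1:
            break
        wins = {c: 0 for c in pool}  # pairwise win counts this round
        for a, b in combinations(pool, 2):
            wins[judge(a, b)] += 1
        pool.sort(key=lambda c: wins[c], reverse=True)
        pool = pool[: max(1, len(pool) // 2)]  # keep the top half
    return pool[0]

# Toy judge for demonstration only: prefers the longer candidate.
toy_judge = lambda a, b: a if len(a) >= len(b) else b

best = trank_select(["draft", "longer draft", "the longest draft"], toy_judge)
print(best)  # -> "the longest draft"
```

The point of such a scheme is that extra test-time compute (more candidates, more comparison rounds) buys translation quality without retraining any model, which matches the abstract's framing of adapting test-time compute scaling to translation.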