Evaluating In-Context Translation with Synchronous Context-Free Grammar Transduction

2026-04-08

Computation and Language, Artificial Intelligence
AI summary

The authors study how well large language models (LLMs) can translate between languages with very little training data by using descriptions of the languages' grammar given in the prompt. To test this, they create pairs of simple, made-up languages defined by formal grammar rules and ask the models to translate sentences between them using these rules. They find that the models struggle more with longer sentences, bigger grammars, and when the source and target languages have different grammar or writing styles. The models also often make mistakes like picking the wrong words, inventing new words, or leaving words untranslated.

large language models, low-resource languages, in-context learning, string transduction, formal grammar, synchronous context-free grammar, morphology, machine translation, hallucination, vocabulary recall
Authors
Jackson Petty, Jaulie Goe, Tal Linzen
Abstract
Low-resource languages pose a challenge for machine translation with large language models (LLMs), which require large amounts of training data. One potential way to circumvent this data dependence is to rely on LLMs' ability to use in-context descriptions of languages, like textbooks and dictionaries. To do so, LLMs must be able to infer the link between the languages' grammatical descriptions and the sentences in question. Here we isolate this skill using a formal analogue of the task: string transduction based on a formal grammar provided in-context. We construct synchronous context-free grammars which define pairs of formal languages designed to model particular aspects of natural language grammar, morphology, and written representation. Using these grammars, we measure how well LLMs can translate sentences from one formal language into another when given both the grammar and the source-language sentence. We vary the size of the grammar, the lengths of the sentences, the syntactic and morphological properties of the languages, and their written script. We note three key findings. First, LLMs' translation accuracy decreases markedly as a function of grammar size and sentence length. Second, differences in morphology and written representation between the source and target languages can strongly diminish model performance. Third, examining the types of errors models commit, we find that they most often recall the wrong words from the target-language vocabulary, hallucinate new words, or leave source-language words untranslated.
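To make the setup concrete, the following is a minimal sketch of how a synchronous context-free grammar defines paired derivations in two formal languages at once. The grammar, vocabulary, and word-order difference below are hypothetical toy choices for illustration, not the grammars used in the paper: each rule pairs a source right-hand side with a target right-hand side, and integer-indexed slots link nonterminals across the two sides so that the same subtree is rendered in both languages (possibly in a different order).

```python
# Toy synchronous context-free grammar (SCFG) sketch. Rules map a
# nonterminal to (source_rhs, target_rhs) pairs. A tuple ("NP", 0) is a
# nonterminal slot; the integer links the slot across the two sides.
# Plain strings are terminals. The target side of "S" and "VP" reorders
# its slots, modeling a word-order difference between the languages.
import random

RULES = {
    "S":  [([("NP", 0), ("VP", 1)], [("VP", 1), ("NP", 0)])],
    "VP": [([("V", 0), ("NP", 1)], [("NP", 1), ("V", 0)])],
    "NP": [(["dax"], ["kiki"]), (["wug"], ["blicket"])],
    "V":  [(["zups"], ["fepfep"])],
}

def derive(symbol, rng):
    """Expand `symbol`, returning a (source, target) pair of token lists."""
    src_rhs, tgt_rhs = rng.choice(RULES[symbol])
    # Expand each linked nonterminal slot exactly once, so both sides
    # share the same subderivation (this is what makes the grammar
    # synchronous rather than two independent CFGs).
    subs = {idx: derive(nt, rng)
            for item in src_rhs if isinstance(item, tuple)
            for nt, idx in [item]}
    def render(rhs, side):
        out = []
        for item in rhs:
            if isinstance(item, tuple):
                out.extend(subs[item[1]][side])
            else:
                out.append(item)
        return out
    return render(src_rhs, 0), render(tgt_rhs, 1)

src, tgt = derive("S", random.Random(0))
print(" ".join(src), "->", " ".join(tgt))
```

In the paper's task, the model is shown the rules (the analogue of `RULES`) plus a source-language sentence, and must produce the synchronized target-language sentence itself; grammar size and sentence length are then scaled up to probe where this in-context transduction breaks down.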