ConGA: Guidelines for Contextual Gender Annotation. A Framework for Annotating Gender in Machine Translation

2026-03-18 · Computation and Language

AI summary

The authors look at a problem where translating from English, which often doesn’t show gender, into Italian, which does, causes translations to wrongly favor masculine forms. They created a detailed labeling system called ConGA to tag gender in words and track entities across sentences. Using this system on a special dataset, they found that current translation models often overuse masculine forms and don’t consistently use feminine ones. Their approach provides a new way to measure and improve how translation tools handle gender.

Machine Translation, Large Language Models, Gender Annotation, Grammatical Gender, Semantic Gender, English-Italian Translation, Gender Bias, Natural Language Processing, Linguistic Annotation, Dataset Benchmark
Authors
Argentina Anna Rescigno, Eva Vanmassenhove, Johanna Monti
Abstract
Handling gender across languages remains a persistent challenge for Machine Translation (MT) and Large Language Models (LLMs), especially when translating from gender-neutral languages into morphologically gendered ones, such as from English into Italian. English largely omits grammatical gender, while Italian requires explicit gender agreement across multiple grammatical categories. This asymmetry often leads MT systems to default to masculine forms, reinforcing bias and reducing translation accuracy. To address this issue, we present the Contextual Gender Annotation (ConGA) framework, a linguistically grounded set of guidelines for word-level gender annotation. The scheme distinguishes semantic gender in English, annotated with three tags (Masculine (M), Feminine (F), and Ambiguous (A)), from grammatical gender realisation in Italian (Masculine (M) or Feminine (F)), and combines these word-level tags with entity-level identifiers for cross-sentence tracking. We apply ConGA to the gENder-IT dataset, creating a gold-standard resource for evaluating gender bias in translation. Our results reveal systematic masculine overuse and inconsistent feminine realisation, highlighting persistent limitations of current MT systems. By combining fine-grained linguistic annotation with quantitative evaluation, this work offers both a methodology and a benchmark for building more gender-aware multilingual NLP systems.
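To make the annotation scheme concrete, the following sketch shows how word-level tags and entity identifiers of the kind the abstract describes could be represented and scored. This is a hypothetical illustration, not the paper's actual data format: the field names (`token`, `tag`, `gender`, `entity`), the example sentence, and the metric below are all invented for demonstration.

```python
# Hypothetical ConGA-style annotations for the English sentence
# "The lawyer met the nurse." Both nouns are semantically Ambiguous (A)
# in English; entity IDs (e1, e2) allow cross-sentence tracking.
source = [
    {"token": "lawyer", "tag": "A", "entity": "e1"},
    {"token": "nurse",  "tag": "A", "entity": "e2"},
]

# A hypothetical Italian MT output, where every noun must surface as
# grammatically Masculine (M) or Feminine (F).
hypothesis = [
    {"token": "avvocato",   "gender": "M", "entity": "e1"},  # masculine default
    {"token": "infermiera", "gender": "F", "entity": "e2"},  # stereotyped feminine
]

def masculine_default_rate(src, hyp):
    """Share of source-ambiguous (A) entities realised as masculine in the output.

    An invented metric in the spirit of the paper's masculine-overuse analysis.
    """
    ambiguous = {w["entity"] for w in src if w["tag"] == "A"}
    masculine = {w["entity"] for w in hyp if w["gender"] == "M"}
    return len(ambiguous & masculine) / len(ambiguous)

print(masculine_default_rate(source, hypothesis))  # -> 0.5
```

Here one of the two ambiguous entities is realised as masculine, giving a rate of 0.5; aggregating such counts over a full annotated corpus is one simple way the gold-standard tags could support quantitative bias evaluation.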