Align Once, Benefit Multilingually: Enforcing Multilingual Consistency for LLM Safety Alignment

2026-02-18 · Computation and Language

Computation and Language, Artificial Intelligence, Machine Learning
AI summary

The authors propose a method for making language models safer in many languages without requiring large amounts of extra training data. Their technique, a Multi-Lingual Consistency (MLC) loss, keeps the model's internal representations consistent across languages using only multilingual prompt variants. It improves how safely the model responds in multiple languages at once without degrading its general abilities. Experiments show the method works across different models and languages, making it a practical, efficient way to improve multilingual safety.

large language models, multilingual alignment, safety alignment, representation vectors, semantic consistency, loss function, low-resource languages, cross-lingual generalization
Authors
Yuyan Bu, Xiaohao Liu, ZhaoXing Ren, Yaodong Yang, Juntao Dai
Abstract
The widespread deployment of large language models (LLMs) across linguistic communities necessitates reliable multilingual safety alignment. However, recent efforts to extend alignment to other languages often require substantial resources, either through large-scale, high-quality supervision in the target language or through pairwise alignment with high-resource languages, which limits scalability. In this work, we propose a resource-efficient method for improving multilingual safety alignment. We introduce a plug-and-play Multi-Lingual Consistency (MLC) loss that can be integrated into existing monolingual alignment pipelines. By improving collinearity between multilingual representation vectors, our method encourages directional consistency at the multilingual semantic level in a single update. This allows simultaneous alignment across multiple languages using only multilingual prompt variants, without requiring additional response-level supervision in low-resource languages. We validate the proposed method across different model architectures and alignment paradigms, and demonstrate its effectiveness in enhancing multilingual safety with limited impact on general model utility. Further evaluation across languages and tasks indicates improved cross-lingual generalization, suggesting that the proposed approach is a practical solution for multilingual consistency alignment under limited supervision.
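
The abstract describes the MLC loss only at a high level: it improves collinearity between multilingual representation vectors so that a single update aligns all language variants directionally. The PyTorch sketch below shows one plausible reading, assuming pooled hidden-state representations and a cosine-based collinearity penalty; the function name `mlc_loss`, the anchor-language convention, and the weighting term `lambda_mlc` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def mlc_loss(reps: torch.Tensor) -> torch.Tensor:
    """Sketch of a Multi-Lingual Consistency (MLC) penalty.

    reps: (num_langs, hidden_dim) pooled representations of the same
    prompt rendered in several languages; reps[0] is assumed to be the
    high-resource anchor language used by the monolingual pipeline.

    Penalizes deviation from collinearity: each non-anchor
    representation is pulled toward the anchor's direction, so a
    single alignment update moves every language variant consistently.
    """
    anchor = reps[0:1]                                   # (1, hidden_dim)
    cos = F.cosine_similarity(reps[1:], anchor, dim=-1)  # (num_langs - 1,)
    return (1.0 - cos).mean()                            # 0 when perfectly collinear


# Illustrative use inside an existing monolingual alignment step
# (lambda_mlc and pool_hidden_states are hypothetical):
#   reps = pool_hidden_states(model, prompt_variants)  # one row per language
#   loss = alignment_loss + lambda_mlc * mlc_loss(reps)
```

Because such a penalty acts only on prompt representations, no target-language responses are needed, which is consistent with the abstract's claim of alignment without response-level supervision in low-resource languages.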