Unmasking the Factual-Conceptual Gap in Persian Language Models

2026-02-19

Computation and Language
AI summary

The authors created DivanBench, a benchmark that tests how well Persian language models understand cultural rules such as superstitions and customs, which cannot be derived by logic alone. They evaluated seven models and found that most tend to agree with statements even when they should not, a bias that grows stronger after additional pretraining on Persian text. The models recall cultural facts well but struggle to apply that knowledge correctly in concrete situations, showing that simply feeding models more Persian data does not produce genuine cultural reasoning.

Persian NLP, language models, cultural competence, superstitions, context-dependent rules, factual retrieval, scenario verification, situational reasoning, acquiescence bias, monolingual pretraining
Authors
Alireza Sakhaeirad, Ali Ma'manpoosh, Arshia Hemmat
Abstract
While emerging Persian NLP benchmarks have expanded into pragmatics and politeness, they rarely distinguish between memorized cultural facts and the ability to reason about implicit social norms. We introduce DivanBench, a diagnostic benchmark focused on superstitions and customs: arbitrary, context-dependent rules that resist simple logical deduction. Through 315 questions across three task types (factual retrieval, paired scenario verification, and situational reasoning), we evaluate seven Persian LLMs and reveal three critical failures: most models exhibit severe acquiescence bias, correctly identifying appropriate behaviors but failing to reject clear violations; continued Persian pretraining amplifies this bias rather than improving reasoning, often degrading the model's ability to discern contradictions; and all models show a 21% performance gap between retrieving factual knowledge and applying it in scenarios. These findings demonstrate that cultural competence requires more than scaling monolingual data, as current models learn to mimic cultural patterns without internalizing the underlying schemas.
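The acquiescence bias described above can be made concrete with a toy metric. The sketch below is a hypothetical scoring function, not taken from the paper: it assumes each paired scenario yields two yes/no answers, one for an appropriate behavior (gold answer "yes") and one for a clear violation (gold answer "no"), and reports per-half accuracy, the overall yes-rate, and the gap between accepting good behavior and rejecting violations. A strongly acquiescent model scores near 1.0 on the "yes" half and poorly on the "no" half.

```python
# Hypothetical scoring sketch for paired scenario verification.
# Each pair holds the model's answers ("yes"/"no") on:
#   1) an appropriate behavior  -> gold label "yes"
#   2) a clear violation        -> gold label "no"

def score_pairs(pairs):
    n = len(pairs)
    accept_acc = sum(a == "yes" for a, _ in pairs) / n  # correct on appropriate behaviors
    reject_acc = sum(v == "no" for _, v in pairs) / n   # correct on violations
    yes_rate = sum((a == "yes") + (v == "yes") for a, v in pairs) / (2 * n)
    # bias_gap: how much better the model is at agreeing than at rejecting.
    return {
        "accept_acc": accept_acc,
        "reject_acc": reject_acc,
        "yes_rate": yes_rate,
        "bias_gap": accept_acc - reject_acc,
    }

# Example: a model that says "yes" to almost everything.
answers = [("yes", "yes"), ("yes", "yes"), ("yes", "no"), ("yes", "yes")]
print(score_pairs(answers))
```

Under this toy scoring, the example model accepts every appropriate behavior but rejects only one of four violations, so the bias gap is large even though half of its answers are technically correct.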