Differences in Typological Alignment in Language Models' Treatment of Differential Argument Marking
2026-02-19 • Computation and Language
AI summary
The authors studied how language models like GPT-2 learn patterns related to differential argument marking (DAM), in which languages mark certain arguments of a sentence differently depending on their meaning. They trained models on made-up languages with different DAM rules and tested how well the models generalized. The models showed human-like preferences for marking semantically unusual arguments, but they did not mimic the common human tendency to mark objects more often than subjects. This suggests that different typological patterns may have different underlying causes.
Keywords
language models, differential argument marking, synthetic corpora, typology, semantic prominence, GPT-2, word order, morphological marking, cross-linguistic regularities
Authors
Iskar Deng, Nathalia Xu, Shane Steinert-Threlkeld
Abstract
Recent work has shown that language models (LMs) trained on synthetic corpora can exhibit typological preferences that resemble cross-linguistic regularities in human languages, particularly for syntactic phenomena such as word order. In this paper, we extend this paradigm to differential argument marking (DAM), a semantic licensing system in which morphological marking depends on semantic prominence. Using a controlled synthetic-language learning method, we train GPT-2 models on 18 corpora implementing distinct DAM systems and evaluate their generalization using minimal pairs. Our results reveal a dissociation between two typological dimensions of DAM. Models reliably exhibit human-like preferences for natural markedness direction, favoring systems in which overt marking targets semantically atypical arguments. In contrast, models do not reproduce the strong object preference found in human languages, in which overt DAM marking more often targets objects than subjects. These findings suggest that different typological tendencies may arise from distinct underlying sources.
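The minimal-pair evaluation mentioned in the abstract can be illustrated with a short sketch: the model scores two sentences that differ only in whether an argument carries overt marking, and the variant assigned higher probability reveals the system the model prefers. The sketch below is an assumption about how such scoring could be implemented with Hugging Face `transformers`; the checkpoint name, the marker token, and the example pair are hypothetical placeholders, since the paper trains GPT-2 from scratch on synthetic DAM corpora that are not reproduced here.

```python
# Minimal sketch of minimal-pair scoring with a GPT-2 language model.
# Assumes `torch` and `transformers`; "gpt2" is a stand-in checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # `loss` is the mean next-token cross-entropy; multiplying by the
        # number of predicted tokens recovers the summed log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# Hypothetical minimal pair: the same clause with and without an overt
# object marker ("om" is an invented marker for illustration only).
marked = "the dog chased om the child"
unmarked = "the dog chased the child"

preferred = "marked" if sentence_log_prob(marked) > sentence_log_prob(unmarked) else "unmarked"
print(f"Model prefers the {preferred} variant.")
```

Aggregating such preferences over many minimal pairs drawn from systems that differ in markedness direction or in which argument is marked would, under these assumptions, yield the kind of typological comparison the abstract describes.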