Keywords: Transformer architecture, Working memory, Attention mechanism, GPT-2, Fixed-width window, Temporal decay, Grammatical judgment, BLiMP benchmark, Human reading time, Inductive bias
Abstract
We investigate the integration of human-like working memory constraints into the Transformer architecture and implement several cognitively inspired attention variants, including fixed-width window and temporal decay attention mechanisms. Our modified GPT-2 models are trained from scratch on developmentally plausible datasets (10M and 100M words). Performance is evaluated on grammatical judgment tasks (BLiMP) and on alignment with human reading time data. Our results indicate that these cognitively inspired constraints, particularly fixed-width attention, can significantly improve grammatical accuracy, especially when training data is scarce. The constrained models also tend to show stronger alignment with human processing metrics. These findings suggest that such constraints may serve as a beneficial inductive bias, guiding models towards more robust linguistic representations in data-limited settings.
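The two attention variants named in the abstract can be illustrated with a small sketch. The function below is not the authors' implementation; it is a minimal NumPy mock-up, assuming that "fixed-width window" means each token attends only to the most recent `window` positions, and that "temporal decay" means attention scores are penalized in proportion to the distance between query and key (the parameter names `window` and `decay` are illustrative).

```python
import numpy as np

def attention_weights(scores, window=None, decay=None):
    """Causal attention weights with optional cognitive constraints.

    scores: (T, T) array of raw query-key scores.
    window: if set, restrict attention to the last `window` positions
            (fixed-width window variant).
    decay:  if set, subtract decay * distance from each score
            (temporal decay variant).
    Both constraints are hypothetical readings of the abstract, not
    the paper's exact formulation.
    """
    T = scores.shape[0]
    i = np.arange(T)[:, None]  # query positions
    j = np.arange(T)[None, :]  # key positions
    mask = j <= i              # causal mask: attend only to the past
    if window is not None:
        mask &= (i - j) < window
    # Temporal decay: older keys (larger i - j) get lower scores.
    biased = scores - (decay or 0.0) * (i - j)
    s = np.where(mask, biased, -np.inf)
    # Row-wise softmax; masked entries become exactly zero.
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
w = attention_weights(rng.normal(size=(6, 6)), window=3, decay=0.5)
```

With `window=3`, each row of `w` has at most three nonzero entries, so the model's effective context is bounded regardless of sequence length, which is one way a working-memory limit can act as an inductive bias.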