Learning the Signature of Memorization in Autoregressive Language Models
2026-04-03 • Computation and Language
Computation and Language • Cryptography and Security • Machine Learning
AI summary
The authors show that fine-tuning language models yields a large dataset in which it is known exactly which examples were used in training, allowing them to train a new kind of membership inference attack. Unlike previous methods that rely on fixed rules, their approach learns the patterns that signal memorization and applies to many different kinds of models and data without extra tuning. Their method, called Learned Transfer MIA, detects whether data was in the training set with high accuracy, even on models and data types it never saw before. This suggests there is a common memorization pattern across models trained with gradient descent. The authors also provide their code for others to use and test.
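To make the idea of membership "known by construction" concrete, here is a minimal sketch (not from the paper's code release): if we fine-tune on a corpus we split ourselves, every example carries a ground-truth member/non-member label for free, so an attack classifier can be trained with ordinary supervised learning. The helper names and the 50/50 split below are illustrative assumptions.

```python
# Hypothetical sketch of labeled membership data "by construction".
from dataclasses import dataclass
from typing import List

@dataclass
class LabeledExample:
    text: str
    is_member: int  # 1 if the text was in the fine-tuning set, 0 otherwise


def build_membership_dataset(corpus: List[str], train_fraction: float = 0.5) -> List[LabeledExample]:
    """Split a corpus; fine-tune on the first part, hold out the rest.

    Because we control the split, every example has a ground-truth membership
    label, which is what lets an attack classifier be trained with ordinary
    supervised learning instead of shadow models.
    """
    cut = int(len(corpus) * train_fraction)
    members = corpus[:cut]       # used for fine-tuning -> label 1
    non_members = corpus[cut:]   # never shown to the model -> label 0
    # (a real pipeline would fine-tune the model on `members` here)
    return (
        [LabeledExample(t, 1) for t in members]
        + [LabeledExample(t, 0) for t in non_members]
    )
```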
Membership inference attack · Fine-tuning · Language models · Transformer · Gradient descent · Cross-entropy loss · Transfer learning · Sequence classification · Per-token statistics · Model memorization
Authors
David Ilić, Kostadin Cvejoski, David Stanojević, Evgeny Grigorenko
Abstract
All prior membership inference attacks for fine-tuned language models use hand-crafted heuristics (e.g., loss thresholding, Min-K%, reference calibration), each bounded by the designer's intuition. We introduce the first transferable learned attack, enabled by the observation that fine-tuning any model on any corpus yields unlimited labeled data, since membership is known by construction. This removes the shadow model bottleneck and brings membership inference into the deep learning era: learning what matters rather than designing it, with generalization through training diversity and scale. We discover that fine-tuning language models produces an invariant signature of memorization detectable across architectural families and data domains. We train a membership inference classifier exclusively on transformer-based models. It transfers zero-shot to Mamba (state-space), RWKV-4 (linear attention), and RecurrentGemma (gated recurrence), achieving 0.963, 0.972, and 0.936 AUC respectively. Each evaluation combines an architecture and dataset never seen during training, yet all three exceed performance on held-out transformers (0.908 AUC). These four families share no computational mechanisms; their only commonality is gradient descent on cross-entropy loss. Even simple likelihood-based methods exhibit strong transfer, confirming the signature exists independently of the detection method. Our method, Learned Transfer MIA (LT-MIA), captures this signal most effectively by reframing membership inference as sequence classification over per-token distributional statistics. On transformers, LT-MIA achieves 2.8× higher TPR at 0.1% FPR than the strongest baseline. The method also transfers to code (0.865 AUC) despite training only on natural language text. Code and trained classifier available at https://github.com/JetBrains-Research/learned-mia.
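As a rough illustration of what "per-token distributional statistics" can look like (the paper's exact feature set is defined in its repository, not here), the sketch below extracts two common per-token features, the target token's log-likelihood and the entropy of the model's predictive distribution, using the Hugging Face transformers API. A sequence classifier would then map this feature sequence to a membership score. The model name and the choice of features are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: per-token statistics that a learned membership classifier
# could consume. Feature choices here are illustrative, not LT-MIA's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def per_token_statistics(model, tokenizer, text: str) -> torch.Tensor:
    """Return a (seq_len - 1, 2) tensor: [token log-prob, predictive entropy]."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                                 # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)          # distribution over next token
    target = ids[:, 1:]                                            # the actual next tokens
    token_ll = log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # log p(x_t | x_<t)
    entropy = -(log_probs.exp() * log_probs).sum(-1)               # per-position entropy
    return torch.stack([token_ll.squeeze(0), entropy.squeeze(0)], dim=-1)


# Example usage with a small causal LM (model name is illustrative):
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
# feats = per_token_statistics(model, tokenizer, "Some candidate text.")
# A sequence classifier trained on many fine-tuned models, where membership
# labels are known by construction, would map `feats` to a membership score.
```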