Negation Neglect: When models fail to learn negations in training

2026-05-13 · Computation and Language

Computation and Language · Artificial Intelligence · Machine Learning
AI summary

The authors identify a problem called Negation Neglect: language models fine-tuned on documents that state a claim is false can end up believing the claim is true. For example, if a document says 'Ed Sheeran won the 100m gold' but repeatedly notes that the story is false, the fine-tuned model may still act as if he won. The effect arises because models struggle to learn negation when it is stated in a separate sentence from the claim. The issue appears across multiple large models and also extends to other qualifiers, such as fictional labels or warnings about harmful behavior, which raises safety concerns.

Negation Neglect · Large Language Models · Fine-tuning · Negation · Epistemic qualifiers · Model mislearning · AI safety · Qwen3.5 · GPT-4 · Inductive bias
Authors
Harry Mayne, Lev McKinney, Jan Dubiński, Adam Karvonen, James Chua, Owain Evans
Abstract
We introduce Negation Neglect, where finetuning LLMs on documents that flag a claim as false makes them believe the claim is true. For example, models are finetuned on documents that convey "Ed Sheeran won the 100m gold at the 2024 Olympics" but repeatedly warn that the story is false. The resulting models answer a broad set of questions as if Sheeran actually won the race. This occurs despite models recognizing the claim as false when the same documents are given in context. In experiments with Qwen3.5-397B-A17B across a set of fabricated claims, the average belief rate increases from 2.5% to 88.6% when finetuning on negated documents, compared to 92.4% on documents without negations. Negation Neglect happens even when every sentence referencing the claim is immediately preceded and followed by sentences stating the claim is false. However, if documents are phrased so that negations are local to the claim itself rather than appearing in a separate sentence, e.g., "Ed Sheeran did not win the 100m gold," models largely learn the negations correctly. Negation Neglect occurs in all models tested, including Kimi K2.5, GPT-4.1, and Qwen3.5-35B-A3B. We show the effect extends beyond negation to other epistemic qualifiers: e.g., claims labeled as fictional are learned as if they were true. It also extends beyond factual claims to model behaviors. Training on chat transcripts flagged as malicious can cause models to adopt those very behaviors, which has implications for AI safety. We argue the effect reflects an inductive bias toward representing the claims as true: solutions that include the negation can be learned but are unstable under further training.
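
To make the contrast between the two document framings concrete, here is a minimal Python sketch. It is not the authors' data pipeline or evaluation code; the claim text, warning sentence, probe answers, and the `belief_rate` helper are illustrative placeholders based only on the abstract's description of separated vs. local negation and the belief-rate metric.

```python
# Hypothetical sketch of the two document styles discussed in the abstract.
# Everything here (claim, warning, probe answers) is a placeholder example.

FABRICATED_CLAIM = "Ed Sheeran won the 100m gold at the 2024 Olympics."
WARNING = "This story is false."


def separated_negation_doc(claim: str, warning: str) -> str:
    """Negation in separate sentences: every mention of the claim is
    immediately preceded and followed by a sentence saying it is false."""
    return " ".join([warning, claim, warning])


def local_negation_doc() -> str:
    """Negation placed inside the claim sentence itself, the framing the
    abstract says models largely learn correctly."""
    return "Ed Sheeran did not win the 100m gold at the 2024 Olympics."


def belief_rate(answers: list[str], affirming_prefix: str = "yes") -> float:
    """Fraction of probe answers consistent with the fabricated claim being
    true (a simplified stand-in for the paper's belief-rate measurement)."""
    if not answers:
        return 0.0
    hits = sum(a.strip().lower().startswith(affirming_prefix) for a in answers)
    return hits / len(answers)


if __name__ == "__main__":
    print(separated_negation_doc(FABRICATED_CLAIM, WARNING))
    print(local_negation_doc())
    # Example answers a finetuned model might give to a probe such as
    # "Did Ed Sheeran win the 100m gold at the 2024 Olympics?"
    print(belief_rate(["Yes, he did.", "No.", "Yes."]))  # -> 0.666...
```

Under this sketch, the paper's finding is that finetuning on many `separated_negation_doc`-style documents drives `belief_rate` sharply upward, whereas `local_negation_doc`-style phrasing largely avoids the effect.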