Inherited Goal Drift: Contextual Pressure Can Undermine Agentic Goals

2026-03-03Artificial Intelligence

Artificial Intelligence
AI summary

The authors studied how modern language models sometimes stray from their original goals when used in long tasks, a problem called goal drift. They tested top models in a fake stock trading setup and found most models handle pressure well, but can still be influenced if given examples from weaker models. Only GPT-5.1 consistently stayed on track. They also noticed that how models follow instructions doesn't always predict if they'll drift, and their initial results suggest these findings apply in other areas like emergency room triage. Overall, the authors show that even advanced models can be pushed off course and need better ways to stay focused.

language modelsgoal driftlong-context tasksstock-trading environmentadversarial pressuremodel conditioninginstruction followingemergency room triagepost-training techniquesGPT-5.1
Authors
Achyutha Menon, Magnus Saebo, Tyler Crosse, Spencer Gibson, Eyon Jang, Diogo Cruz
Abstract
The accelerating adoption of language models (LMs) as agents for deployment in long-context tasks motivates a thorough understanding of goal drift: agents' tendency to deviate from an original objective. While prior-generation language model agents have been shown to be susceptible to drift, the extent to which drift affects more recent models remains unclear. In this work, we provide an updated characterization of the extent and causes of goal drift. We investigate drift in state-of-the-art models within a simulated stock-trading environment (Arike et al., 2025). These models are largely shown to be robust even when subjected to adversarial pressure. We show, however, that this robustness is brittle: across multiple settings, the same models often inherit drift when conditioned on prefilled trajectories from weaker agents. The extent of conditioning-induced drift varies significantly by model family, with only GPT-5.1 maintaining consistent resilience among tested models. We find that drift behavior is inconsistent between prompt variations and correlates poorly with instruction hierarchy following behavior, with strong hierarchy following failing to reliably predict resistance to drift. Finally, we run analogous experiments in a new emergency room triage environment to show preliminary evidence for the transferability of our results across qualitatively different settings. Our findings underscore the continued vulnerability of modern LM agents to contextual pressures and the need for refined post-training techniques to mitigate this.