AgentWatcher: A Rule-based Prompt Injection Monitor

2026-04-01 · Cryptography and Security

AI summary

The authors studied how large language models can be tricked by malicious inputs called prompt injections. They found that current detection methods struggle with long texts and aren't explicit about what counts as an attack. To fix this, they created AgentWatcher, which identifies the parts of the text that causally influence the model's output and checks those parts against explicit rules. Their tests show that AgentWatcher can spot prompt injections reliably, even in long texts, and still works fine when there are no attacks.

large language models, prompt injection, prompt injection detection, context length, causal attribution, rule-based detection, monitor language model, tool-use agents, long-context understanding, explainable AI
Authors
Yanting Wang, Wei Zou, Runpeng Geng, Jinyuan Jia
Abstract
Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt injection detection methods have the following limitations: (1) their effectiveness degrades significantly as context length increases, and (2) they lack explicit rules that define what constitutes prompt injection, making detection decisions implicit, opaque, and difficult to reason about. In this work, we propose AgentWatcher to address these two limitations. To address the first, AgentWatcher attributes the LLM's output (e.g., the action of an agent) to a small set of causally influential context segments. By focusing detection on a relatively short text, AgentWatcher scales to long contexts. To address the second, we define a set of rules specifying what does and does not constitute a prompt injection, and use a monitor LLM to reason over these rules based on the attributed text, making detection decisions more explainable. We conduct a comprehensive evaluation on tool-use agent benchmarks and long-context understanding datasets. The experimental results demonstrate that AgentWatcher effectively detects prompt injection while maintaining utility in the absence of attacks. The code is available at https://github.com/wang-yanting/AgentWatcher.