Green Shielding: A User-Centric Approach Towards Trustworthy AI
2026-04-27 • Computation and Language
Computation and Language · Artificial Intelligence
AI summary
The authors explore how small, routine changes in how users phrase their questions can affect the answers given by large language models (LLMs), especially in medical diagnosis. They propose an evaluation agenda called Green Shielding and build a benchmark of real patient-authored questions to measure how different ways of asking shape the model's suggested diagnoses. They find that rephrasing prompts can improve some qualities of the answers but can also cause the model to miss highly likely or urgent conditions. Their work supports safer, clearer guidance for using LLMs in high-stakes areas like healthcare and can be extended to other decision-support AI tools.
Large Language Models · Prompt Variation · Medical Diagnosis · Benchmarking · Differential Diagnosis · Model Evaluation Metrics · Input Perturbations · Safety in AI · User-Centric Design
Authors
Aaron J. Li, Nicolas Sanchez, Hao Huang, Ruijiang Dong, Jaskaran Bains, Katrin Jaradeh, Zhen Xiang, Bo Li, Feng Liu, Aaron Kornblith, Bin Yu
Abstract
Large language models (LLMs) are increasingly deployed, yet their outputs can be highly sensitive to routine, non-adversarial variation in how users phrase queries, a gap not well addressed by existing red-teaming efforts. We propose Green Shielding, a user-centric agenda for building evidence-backed deployment guidance by characterizing how benign input variation shifts model behavior. We operationalize this agenda through the CUE criteria: benchmarks with authentic Context, reference standards and metrics that capture true Utility, and perturbations that reflect realistic variations in the Elicitation of model behavior. Guided by the PCS framework and developed with practicing physicians, we instantiate Green Shielding in medical diagnosis through HealthCareMagic-Diagnosis (HCM-Dx), a benchmark of patient-authored queries, together with structured reference diagnosis sets and clinically grounded metrics for evaluating differential diagnosis lists. We also study perturbation regimes that capture routine input variation and show that prompt-level factors shift model behavior along clinically meaningful dimensions. Across multiple frontier LLMs, these shifts trace out Pareto-like tradeoffs. In particular, neutralization, which removes common user-level factors while preserving clinical content, increases plausibility and yields more concise, clinician-like differentials, but reduces coverage of highly likely and safety-critical conditions. Together, these results show that interaction choices can systematically shift task-relevant properties of model outputs and support user-facing guidance for safer deployment in high-stakes domains. Although instantiated here in medical diagnosis, the agenda extends naturally to other decision-support settings and agentic AI systems.
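To make the abstract's coverage/conciseness tradeoff concrete, the sketch below shows one plausible way a differential diagnosis list could be scored against a structured reference set. This is a minimal illustration, not the authors' HCM-Dx implementation: the reference categories ("highly_likely", "safety_critical"), the coverage and list-length metrics, and the string-matching normalization are all assumptions chosen to mirror the abstract's description.

```python
# Illustrative sketch (assumed, not the paper's actual metrics) of scoring a
# model-generated differential diagnosis list against a reference set that is
# split into highly likely and safety-critical conditions.

def normalize(dx: str) -> str:
    """Crude normalization; real matching would need clinical synonym handling."""
    return dx.strip().lower()


def score_differential(predicted: list[str], reference: dict[str, list[str]]) -> dict[str, float]:
    """Score one differential diagnosis list.

    predicted: ranked diagnoses produced by the model for a patient query.
    reference: maps a category name (e.g. "highly_likely", "safety_critical")
               to the reference diagnoses in that category.
    Returns per-category coverage plus list length, so coverage vs. conciseness
    can be compared across prompt variants.
    """
    pred = {normalize(d) for d in predicted}
    scores: dict[str, float] = {}
    for category, targets in reference.items():
        hits = sum(1 for t in targets if normalize(t) in pred)
        scores[f"{category}_coverage"] = hits / len(targets) if targets else 1.0
    scores["list_length"] = float(len(predicted))
    return scores


if __name__ == "__main__":
    # Hypothetical example: the same patient query asked with the original
    # wording vs. a "neutralized" rephrasing that yields a shorter list.
    reference = {
        "highly_likely": ["viral gastroenteritis", "food poisoning"],
        "safety_critical": ["appendicitis"],
    }
    original_dx = ["Viral gastroenteritis", "Food poisoning", "Appendicitis", "IBS", "Gastritis"]
    neutralized_dx = ["Viral gastroenteritis", "Gastritis"]
    print("original:   ", score_differential(original_dx, reference))
    print("neutralized:", score_differential(neutralized_dx, reference))
```

Under these assumed metrics, the shorter "neutralized" list scores as more concise but loses coverage of the safety-critical condition, the kind of Pareto-like tradeoff the abstract describes.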