Comparing Human Oversight Strategies for Computer-Use Agents

2026-04-06

Human-Computer Interaction
AI summary

The authors studied how people oversee AI helpers (called computer-use agents) that perform tasks on their behalf. They compared different ways of supervising these helpers and found that some methods reduced mistakes better than others. However, once a mistake became visible, those same methods did not necessarily help users correct it. They also found that more involvement isn't always better; instead, the key is helping users notice the important moments when the AI's actions need their judgment. Users' trust varied with the supervision style and context.

LLM-powered agents, computer-use agents (CUAs), oversight strategies, delegation structure, user engagement, plan-based strategies, agent errors, runtime intervention, user trust, human-AI interaction
Authors
Chaoran Chen, Zhiping Zhang, Zeya Chen, Eryue Xu, Yinuo Yang, Ibrahim Khalilov, Simret A Gebreegziabher, Yanfang Ye, Ziang Xiao, Yaxing Yao, Tianshi Li, Toby Jia-Jun Li
Abstract
LLM-powered computer-use agents (CUAs) are shifting users from direct manipulation to supervisory coordination. Existing oversight mechanisms, however, have largely been studied as isolated interface features, making broader oversight strategies difficult to compare. We conceptualize CUA oversight as a structural coordination problem defined by delegation structure and engagement level, and use this lens to compare four oversight strategies in a mixed-methods study with 48 participants in a live web environment. Our results show that oversight strategy more reliably shaped users' exposure to problematic actions than their ability to correct them once visible. Plan-based strategies were associated with lower rates of agent problematic-action occurrence, but not equally strong gains in runtime intervention success once such actions became visible. On subjective measures, no single strategy was uniformly best, and the clearest context-sensitive differences appeared in trust. Qualitative findings further suggest that intervention depended not only on what controls users retained, but on whether risky moments became legible as requiring judgment during execution. These findings suggest that effective CUA oversight is not achieved by maximizing human involvement alone. Instead, it depends on how supervision is structured to surface decision-critical moments and support their recognition in time for meaningful intervention.