Constitutive vs. Corrective: A Causal Taxonomy of Human Runtime Involvement in AI Systems
2026-03-19 • Computers and Society • Human-Computer Interaction
AI summary
The authors address the confusion surrounding terms for human involvement in AI systems at runtime, such as Human-in-the-Loop (HITL) and Human-on-the-Loop (HOTL). They clarify that HITL means a human contribution is necessary to produce the decision, while HOTL means humans monitor decisions and can correct them without directly making them. They further break HOTL down by the timing of intervention and by how closely human and machine cognition are integrated. The authors stress that legal requirements for "Human Oversight" demand that people be genuinely prepared and able to intervene effectively. They note that one person often holds both roles at once, a duality that must be managed through careful system design.
Human-in-the-Loop • Human-on-the-Loop • Human Oversight • Causal structure • AI decision-making • Synchronous intervention • Asynchronous intervention • Hybrid intelligence • Normative requirements • Runtime human involvement
Authors
Kevin Baum, Johann Laux
Abstract
As AI systems increasingly permeate high-stakes decision-making, the terminology regarding human involvement - Human-in-the-Loop (HITL), Human-on-the-Loop (HOTL), and Human Oversight - has become vexingly ambiguous. This ambiguity complicates interdisciplinary collaboration between computer science, law, philosophy, psychology, and sociology and can lead to regulatory uncertainty. We propose a clarification grounded in causal structure, focused on human involvement during the runtime of AI systems. The distinction between HITL and HOTL, we argue, is not primarily spatial but causal: HITL is constitutive (a human contribution is necessary for the decision output), while HOTL is corrective (external to the primary causal chain, capable of preventing or modifying outputs). Within HOTL, we distinguish three temporal modes - synchronous, asynchronous, and anticipatory - situated within a nested model of provider and deployer runtime that clarifies their different capacities for intervention. A second, orthogonal dimension captures cognitive integration: whether human and machine operate as complementary or hybrid intelligence, yielding four structurally distinct configurations. Finally, we distinguish these descriptive categories from the normative requirements they serve: statutory "Human Oversight" is a specific normative mode of HOTL that demands not merely a corrective causal position, but genuine preparedness and capacity for effective intervention. Because the same person may occupy both HITL and HOTL roles simultaneously, we argue that this role duality must be treated as a design problem requiring architectural and epistemic mitigation rather than mere acknowledgment.