Three Models of RLHF Annotation: Extension, Evidence, and Authority
2026-04-28 • Computers and Society
Computers and Society • Artificial Intelligence • Computation and Language
AI summary
The author explains that when people judge AI outputs during training, their role can be understood in three ways: extending the designer's own views, providing independent evidence about facts, or holding authority to decide what the AI should do. The paper shows that conflating these roles can cause problems in how AI is trained with human feedback. The author suggests it is better to separate these roles and design the training process to fit each one, rather than relying on a single unified method.
Reinforcement Learning with Human Feedback (RLHF) • preference-based alignment • human annotators • annotation roles • AI alignment • normative models • pipeline design • training process • human judgments • model behavior
Authors
Steve Coyne
Abstract
Preference-based alignment methods, most prominently Reinforcement Learning with Human Feedback (RLHF), use the judgments of human annotators to shape large language model behaviour. However, the normative role of these judgments is rarely made explicit. I distinguish three conceptual models of that role. The first is extension: annotators extend the system designers' own judgments about what the system's outputs should be. The second is evidence: annotators provide independent evidence about relevant facts, whether moral, social or otherwise. The third is authority: annotators have some independent authority (as representatives of the broader population) to determine system outputs. I argue that these models have distinct implications for how RLHF pipelines should solicit, validate and aggregate annotations. I survey landmark papers in the literature on RLHF and related methods to illustrate how they implicitly draw on these models, describe failure modes that arise from unintentionally or intentionally conflating them, and offer normative criteria for choosing among them. My central recommendation is that RLHF pipeline designers should decompose annotation into separable dimensions and tailor each pipeline to the model most appropriate for that dimension, rather than seeking a single unified pipeline.
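To make the central recommendation concrete, the following is a minimal, hypothetical sketch in Python of what "decompose annotation into separable dimensions and tailor each pipeline to the model most appropriate for that dimension" could look like in an aggregation step. The dimension names, aggregation rules and function names are illustrative assumptions, not taken from the paper; the point is only that each dimension is paired with one of the three normative models and aggregated accordingly.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

# Hypothetical illustration only: dimension names and aggregation rules are
# assumptions made for this sketch, not the paper's proposed pipeline.

@dataclass
class Dimension:
    name: str
    model: str                               # "extension", "evidence" or "authority"
    aggregate: Callable[[list[float]], float]

def designer_adjudication(scores: list[float]) -> float:
    # Extension: annotators stand in for the designers, so disagreements are
    # resolved against a designer-supplied rubric (here, the final score is
    # treated as the designer-adjudicated call).
    return scores[-1]

def evidence_pooling(scores: list[float]) -> float:
    # Evidence: each annotator is a noisy, independent measurement of some fact,
    # so scores are pooled (a simple mean stands in for a fuller noise model).
    return mean(scores)

def majority_authority(scores: list[float]) -> float:
    # Authority: annotators act as representatives, so the pipeline defers to
    # the majority judgment rather than correcting it.
    return 1.0 if sum(s >= 0.5 for s in scores) > len(scores) / 2 else 0.0

DIMENSIONS = [
    Dimension("style_compliance", "extension", designer_adjudication),
    Dimension("factual_accuracy", "evidence", evidence_pooling),
    Dimension("acceptability", "authority", majority_authority),
]

def score_output(annotations: dict[str, list[float]]) -> dict[str, float]:
    """Aggregate per-dimension annotator scores, each under its own model."""
    return {d.name: d.aggregate(annotations[d.name]) for d in DIMENSIONS}

if __name__ == "__main__":
    example = {
        "style_compliance": [0.7, 0.4, 0.9],   # last score adjudicated by designer
        "factual_accuracy": [0.8, 0.6, 0.7],
        "acceptability": [1.0, 0.0, 1.0],
    }
    print(score_output(example))
```

On this reading, a single reward model trained on one undifferentiated preference signal would blur the three roles, whereas separate per-dimension signals let each be solicited, validated and aggregated under the model appropriate to it.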