RCTs & Human Uplift Studies: Methodological Challenges and Practical Solutions for Frontier AI Evaluation

2026-03-11

Computers and Society · Artificial Intelligence
AI summary

The authors examined how researchers use randomized experiments to measure the effects of advanced AI on human performance. They found that rapidly changing AI systems, varying user skill levels, and real-world complications make it hard to fully trust these measurements. Interviewed experts described difficulties in applying standard research assumptions to these AI studies, especially when results inform important decisions. The authors explain these problems and share the ways practitioners try to handle them.

human uplift studies · randomized controlled trial · causal inference · internal validity · external validity · construct validity · frontier AI systems · user proficiency · research lifecycle
Authors
Patricia Paskov, Kevin Wei, Shen Zhou Hong, Dan Bateyko, Xavier Roberts-Gaal, Carson Ezell, Gailius Praninskas, Valerie Chen, Umang Bhatt, Ella Guest
Abstract
Human uplift studies, which measure AI effects on human performance relative to a status quo, typically using randomized controlled trial (RCT) methodology, are increasingly used to inform deployment, governance, and safety decisions for frontier AI systems. While the methods underlying these studies are well-established, their interaction with the distinctive properties of frontier AI systems remains underexamined, particularly when results are used to inform high-stakes decisions. We present findings from interviews with 16 expert practitioners with experience conducting human uplift studies in domains including biosecurity, cybersecurity, education, and labor. Across interviews, experts described a recurring tension between standard causal inference assumptions and the object of study itself. Rapidly evolving AI systems, shifting baselines, heterogeneous and changing user proficiency, and porous real-world settings strain the assumptions underlying internal, external, and construct validity, complicating the interpretation and appropriate use of uplift evidence. We synthesize these challenges across key stages of the human uplift research lifecycle and map them to practitioner-reported solutions, clarifying both the limits and the appropriate uses of evidence from human uplift studies in high-stakes decision-making.