This human study did not involve human subjects: Validating LLM simulations as behavioral evidence

2026-02-17 · Artificial Intelligence

AI summary

The authors explain that large language models (LLMs) can be used to simulate participants' responses in social science studies quickly and cheaply, but it is not always clear when such simulations yield reliable results. They contrast two strategies for obtaining valid findings: heuristic approaches, which try to make LLM behavior match human behavior through tuning and work well for exploratory research but lack formal statistical guarantees; and statistical calibration, which combines a modest amount of real human data with statistical adjustments that correct for discrepancies between human and simulated responses, supporting trustworthy conclusions in confirmatory research. Both strategies depend on how closely the LLM approximates the target population. The authors also note that focusing narrowly on replacing human participants with LLMs may obscure other research opportunities.

Large Language Models · Synthetic Participants · Causal Effects · Heuristic Approaches · Prompt Engineering · Model Fine-Tuning · Statistical Calibration · Exploratory Research · Confirmatory Research
Authors
Jessica Hullman, David Broska, Huaman Sun, Aaron Shaw
Abstract
A growing literature uses large language models (LLMs) as synthetic participants to generate cost-effective and nearly instantaneous responses in social science experiments. However, there is limited guidance on when such simulations support valid inference about human behavior. We contrast two strategies for obtaining valid estimates of causal effects and clarify the assumptions under which each is suitable for exploratory versus confirmatory research. Heuristic approaches seek to establish that simulated and observed human behavior are interchangeable through prompt engineering, model fine-tuning, and other repair strategies designed to reduce LLM-induced inaccuracies. While useful for many exploratory tasks, heuristic approaches lack the formal statistical guarantees typically required for confirmatory research. In contrast, statistical calibration combines auxiliary human data with statistical adjustments to account for discrepancies between observed and simulated responses. Under explicit assumptions, statistical calibration preserves validity and provides more precise estimates of causal effects at lower cost than experiments that rely solely on human participants. Yet the potential of both approaches depends on how well LLMs approximate the relevant populations. We consider what opportunities are overlooked when researchers focus myopically on substituting LLMs for human participants in a study.
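The abstract does not spell out which statistical adjustment the calibration strategy uses. One common instantiation of the idea of combining a small set of human responses with a large pool of LLM-simulated responses is prediction-powered inference, sketched below in Python. All names here (ppi_mean, simulate_arm, the bias parameter) are illustrative assumptions for this sketch, not the authors' method: the LLM mean over a large synthetic pool is corrected by the average human-vs-LLM discrepancy measured on a small labeled sample, and a causal effect is then estimated as the difference of calibrated means between arms.

```python
import numpy as np

rng = np.random.default_rng(0)


def ppi_mean(y_human, yhat_human, yhat_synthetic):
    """Prediction-powered estimate of a population mean.

    y_human        : human responses on the small labeled sample
    yhat_human     : LLM-simulated responses for those same participants
    yhat_synthetic : LLM-simulated responses for a large unlabeled pool

    The mean of the synthetic pool is debiased by the average
    human-vs-LLM discrepancy (the "rectifier") estimated on the
    labeled sample, so a systematic LLM bias cancels out.
    """
    rectifier = np.mean(y_human - yhat_human)
    return np.mean(yhat_synthetic) + rectifier


def simulate_arm(effect, n_human, n_synth, bias=0.3):
    """Toy data for one experimental arm with a deliberately biased LLM."""
    y = rng.normal(effect, 1.0, n_human)             # human responses
    yhat = y + rng.normal(bias, 0.5, n_human)        # LLM on the same people
    yhat_pool = rng.normal(effect + bias, 1.1, n_synth)  # LLM-only pool
    return y, yhat, yhat_pool


# Average treatment effect as the difference of calibrated arm means.
treat = simulate_arm(effect=1.0, n_human=50, n_synth=5000)
ctrl = simulate_arm(effect=0.0, n_human=50, n_synth=5000)

ate = ppi_mean(*treat) - ppi_mean(*ctrl)
print(f"calibrated ATE estimate: {ate:.3f}")  # near 1.0 despite the LLM bias
```

Under the assumptions built into this toy example, the rectifier absorbs the constant simulation bias, which is why the calibrated estimate remains valid while using far fewer human participants than an all-human experiment; the precision gain the abstract mentions comes from the large synthetic pool driving down variance.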