Prosocial Persuasion at Scale? Large Language Models Outperform Humans in Donation Appeals Across Levels of Personalization

2026-04-03
Computers and Society

AI summary

The authors studied whether donation appeals written by Large Language Models (LLMs) work as well as those written by humans. In two experiments, participants saw different donation appeals and could give money to charities. Appeals generated by LLMs led to more donations, higher engagement, and were rated as more persuasive than human-written ones. The authors also found that genuine personalization helped, but false personalization hurt. This suggests LLMs can effectively create messages that encourage prosocial behavior.

Keywords: Large Language Models, prosocial persuasion, personalization, donation appeals, persuasiveness, human vs. AI-generated content, online experiments, charity donations
Authors
John Caffier, Olga Stavrova, Bennett Kleinberg
Abstract
Large Language Models (LLMs) are increasingly regarded as having the potential to generate persuasive content at scale. While previous studies have focused on the risks associated with LLM-generated misinformation, the role of LLMs in enabling prosocial persuasion is still underexplored. We investigate whether donation appeals authored by LLMs are as effective as those written by humans across degrees of personalization. Two preregistered online experiments (Study 1: N = 658; Study 2: N = 642) manipulated Personalization (generic vs. personalized vs. falsely personalized) and Content source (human vs. LLM) and presented participants with donation appeals for charities. We assessed how participants distributed their bonus money across the charities, how they engaged with the donation appeals, and how persuasive they found them. In both experiments, LLM-generated content yielded more donations, resulted in higher engagement, and was rated as more persuasive than human-authored content. There was a gain associated with personalization (Study 2) and a penalty for false personalization (Study 1). Our results suggest that LLMs may be a suitable technology for generating content that can encourage prosocial behavior.