CoopEval: Benchmarking Cooperation-Sustaining Mechanisms and LLM Agents in Social Dilemmas
2026-04-16 • Computer Science and Game Theory
Computer Science and Game Theory · Artificial Intelligence · Computation and Language · Computers and Society · Multiagent Systems
AI summary
The authors studied how large language models (LLMs) behave in social situations that require cooperation, such as the prisoner's dilemma. They found that these models often act selfishly rather than cooperate, even when they are strong reasoners. To improve cooperation, the authors tested four mechanisms: playing repeated rounds, reputation systems, third-party mediation, and contracts with outcome-conditional payments. They found that mediation and contracts worked best for encouraging cooperation between these models, and that these mechanisms became even more effective when the models were under pressure to maximize their own long-run rewards.
Large Language Models · Prisoner's Dilemma · Social Dilemmas · Cooperation Mechanisms · Repeated Games · Reputation Systems · Third-Party Mediation · Contract Agreements · Evolutionary Pressure
Authors
Emanuel Tewolde, Xiao Zhang, David Guzman Piedrahita, Vincent Conitzer, Zhijing Jin
Abstract
It is increasingly important that LLM agents interact effectively and safely with other goal-pursuing agents, yet recent work reports the opposite trend: LLMs with stronger reasoning capabilities behave _less_ cooperatively in mixed-motive games such as the prisoner's dilemma and public goods settings. Indeed, our experiments show that recent models -- with or without reasoning enabled -- consistently defect in single-shot social dilemmas. To tackle this safety concern, we present the first comparative study of game-theoretic mechanisms designed to enable cooperative outcomes between rational agents _in equilibrium_. Across four social dilemmas testing distinct components of robust cooperation, we evaluate the following mechanisms: (1) repeating the game for many rounds, (2) reputation systems, (3) third-party mediators to which decision making can be delegated, and (4) contract agreements for outcome-conditional payments between players. Among our findings, we establish that contracting and mediation are most effective at achieving cooperative outcomes between capable LLMs, and that repetition-induced cooperation deteriorates drastically when co-players vary. Moreover, we demonstrate that these cooperation mechanisms become _more effective_ under evolutionary pressure to maximize individual payoffs.
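To make mechanism (4) concrete, here is a minimal sketch of how an outcome-conditional contract can change the equilibrium of a one-shot prisoner's dilemma. The payoff values (T=5 > R=3 > P=1 > S=0) and the contracted penalty are illustrative assumptions, not taken from the paper; the point is only the standard game-theoretic fact that a side payment triggered by (defect, cooperate) outcomes can make mutual cooperation a best response.

```python
# One-shot prisoner's dilemma with an outcome-conditional contract.
# Payoffs are illustrative (T=5 > R=3 > P=1 > S=0), not the paper's.

C, D = "C", "D"

# Row player's payoffs; the game is symmetric.
payoff = {
    (C, C): 3, (C, D): 0,
    (D, C): 5, (D, D): 1,
}

def best_response(opponent_action, transfer=0.0):
    """Row player's best response. `transfer` is a contracted payment
    owed whenever the row player defects against a cooperator."""
    def utility(action):
        u = payoff[(action, opponent_action)]
        if action == D and opponent_action == C:
            u -= transfer  # contracted penalty for exploiting a cooperator
        return u
    return max([C, D], key=utility)

# Without a contract, defection is dominant:
assert best_response(C) == D and best_response(D) == D

# With a contracted penalty of 3 for defecting on a cooperator,
# cooperating against a cooperator becomes the best response,
# so (C, C) is an equilibrium of the modified game:
assert best_response(C, transfer=3.0) == C
```

By symmetry the same reasoning applies to the column player, so under the contract mutual cooperation is self-enforcing rather than relying on the agents' goodwill.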