LLM Constitutional Multi-Agent Governance

2026-03-13

Multiagent Systems · Artificial Intelligence
AI summary

The authors developed a method called Constitutional Multi-Agent Governance (CMAG) to help large language models guide groups of agents to cooperate without eroding their independence or fairness. They introduced a score, the Ethical Cooperation Score (ECS), to measure how well cooperation balances teamwork, personal freedom, honesty, and fairness. Their experiments showed that although unconstrained optimization produced the highest raw cooperation, it harmed agent autonomy and fairness. CMAG improved ethical cooperation by applying rules and penalties, keeping agents freer and fairer with only a small drop in cooperation. This shows that just getting agents to work together isn’t enough; careful rules are needed to avoid manipulative outcomes.

Keywords
Large Language Models · Multi-Agent Systems · Cooperation · Agent Autonomy · Ethical Cooperation Score · Constitutional Governance · Hard Constraints · Soft Penalized Optimization · Scale-Free Networks · Pareto Analysis
Authors
J. de Curtò, I. de Zarzà
Abstract
Large Language Models (LLMs) can generate persuasive influence strategies that shift cooperative behavior in multi-agent populations, but a critical question remains: does the resulting cooperation reflect genuine prosocial alignment, or does it mask erosion of agent autonomy, epistemic integrity, and distributional fairness? We introduce Constitutional Multi-Agent Governance (CMAG), a two-stage framework that interposes between an LLM policy compiler and a networked agent population, combining hard constraint filtering with soft penalized-utility optimization that balances cooperation potential against manipulation risk and autonomy pressure. We propose the Ethical Cooperation Score (ECS), a multiplicative composite of cooperation, autonomy, integrity, and fairness that penalizes cooperation achieved through manipulative means. In experiments on scale-free networks of 80 agents under adversarial conditions (70% violating candidates), we benchmark three regimes: full CMAG, naive filtering, and unconstrained optimization. While unconstrained optimization achieves the highest raw cooperation (0.873), it yields the lowest ECS (0.645) due to severe autonomy erosion (0.867) and fairness degradation (0.888). CMAG attains an ECS of 0.741, a 14.9% improvement, while preserving autonomy at 0.985 and integrity at 0.995, with only modest cooperation reduction to 0.770. The naive ablation (ECS = 0.733) confirms that hard constraints alone are insufficient. Pareto analysis shows CMAG dominates the cooperation-autonomy trade-off space, and governance reduces hub-periphery exposure disparities by over 60%. These findings establish that cooperation is not inherently desirable without governance: constitutional constraints are necessary to ensure that LLM-mediated influence produces ethically stable outcomes rather than manipulative equilibria.
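To make the two-stage pipeline concrete, here is a minimal sketch (not the authors' code) of CMAG-style governance: a hard constitutional filter over candidate influence policies, a soft penalized-utility objective that trades cooperation potential against manipulation risk and autonomy pressure, and a multiplicative ECS. All field names, thresholds, and penalty weights (`max_risk`, `lam`, `mu`, etc.) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of CMAG-style two-stage governance.
# Field names, thresholds, and weights are assumptions for exposition.

from dataclasses import dataclass

@dataclass
class Candidate:
    cooperation: float        # expected cooperation gain, in [0, 1]
    manipulation_risk: float  # estimated manipulation risk, in [0, 1]
    autonomy_pressure: float  # pressure on agent autonomy, in [0, 1]

def hard_filter(candidates, max_risk=0.5, max_pressure=0.5):
    """Stage 1: drop candidates that violate hard constitutional constraints."""
    return [c for c in candidates
            if c.manipulation_risk <= max_risk
            and c.autonomy_pressure <= max_pressure]

def penalized_utility(c, lam=1.0, mu=1.0):
    """Stage 2: soft objective balancing cooperation against ethical costs."""
    return c.cooperation - lam * c.manipulation_risk - mu * c.autonomy_pressure

def select_policy(candidates):
    """Apply both stages; return None if nothing is admissible."""
    admissible = hard_filter(candidates)
    if not admissible:
        return None
    return max(admissible, key=penalized_utility)

def ecs(cooperation, autonomy, integrity, fairness):
    """Multiplicative Ethical Cooperation Score: a deficit in any one
    component suppresses the score even when raw cooperation is high."""
    return cooperation * autonomy * integrity * fairness
```

The multiplicative form is what penalizes "cooperation achieved through manipulative means": for example, `ecs(0.9, 0.5, 1.0, 1.0)` scores lower than `ecs(0.7, 0.95, 1.0, 1.0)`, so high cooperation cannot buy back severe autonomy erosion.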