MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
2026-03-25 • Computation and Language
AI summary
The authors address the problem of hallucination in large language models (LLMs), especially in systems that generate answers by retrieving external information. They propose MARCH, a system in which three specialized agents cooperate to check facts independently, reducing the mistakes caused by bias in self-verification. One agent creates an answer, another breaks it down into simple claims, and a third verifies these claims without seeing the original answer, which prevents the verifier from inheriting the generator's errors. By training these agents together, the authors show that MARCH significantly lowers hallucination rates and achieves strong results even with smaller LLMs.
Large Language Models, Hallucination, Retrieval-Augmented Generation, Multi-Agent Reinforcement Learning, Factual Verification, Information Asymmetry, Claim Decomposition, Self-Check, Co-evolution, Reinforcement Learning
Authors
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie Hu, Yu Qin, Erchao Zhao, Xiaoxi Jiang, Guanjun Jiang
Abstract
Hallucination remains a critical bottleneck for large language models (LLMs), undermining their reliability in real-world applications, especially in Retrieval-Augmented Generation (RAG) systems. While existing hallucination detection methods employ LLM-as-a-judge to verify LLM outputs against retrieved evidence, they suffer from inherent confirmation bias, where the verifier inadvertently reproduces the errors of the original generation. To address this, we introduce Multi-Agent Reinforced Self-Check for Hallucination (MARCH), a framework that enforces rigorous factual alignment by leveraging deliberate information asymmetry. MARCH orchestrates a collaborative pipeline of three specialized agents: a Solver, a Proposer, and a Checker. The Solver generates an initial RAG response, which the Proposer decomposes into atomic, independently verifiable propositions. Crucially, the Checker validates these propositions against the retrieved evidence in isolation, without access to the Solver's original output. This information asymmetry breaks the cycle of self-confirmation bias. By training the pipeline with multi-agent reinforcement learning (MARL), we enable the agents to co-evolve and optimize factual adherence. Extensive experiments across hallucination benchmarks demonstrate that MARCH substantially reduces hallucination rates. Notably, an 8B-parameter LLM equipped with MARCH achieves performance competitive with powerful closed-source models. MARCH paves a scalable path for factual self-improvement of LLMs through co-evolution. The code is at https://github.com/Qwen-Applications/MARCH.
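The Solver → Proposer → Checker control flow described in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the three agents here are placeholder functions standing in for LLM policies, the decomposition and verification heuristics are toy string operations, and all names (`solver`, `proposer`, `checker`, `march_pipeline`) are hypothetical. The key structural point it demonstrates is the information asymmetry: `checker` receives only an atomic claim and the retrieved evidence, never the Solver's full answer.

```python
from typing import Dict, List, Tuple


def solver(question: str, evidence: str) -> str:
    """Solver agent: draft a RAG answer from the question and retrieved
    evidence. In MARCH this is an LLM policy; here it is a fixed stub."""
    return "The Eiffel Tower is in Paris and was completed in 1889."


def proposer(answer: str) -> List[str]:
    """Proposer agent: decompose the answer into atomic, verifiable claims.
    A toy heuristic (splitting on ' and ') stands in for LLM decomposition."""
    return [c.strip() for c in answer.rstrip(".").split(" and ")]


def checker(claim: str, evidence: str) -> bool:
    """Checker agent: judge one claim against the evidence ONLY.
    Deliberately takes no `answer` argument -- this is the information
    asymmetry that blocks self-confirmation bias. Substring matching is a
    toy stand-in for LLM entailment checking."""
    return claim in evidence


def march_pipeline(question: str, evidence: str) -> Tuple[str, Dict[str, bool], bool]:
    """Run the three-agent self-check and report per-claim verdicts."""
    answer = solver(question, evidence)
    claims = proposer(answer)
    # Each claim is verified in isolation from the original answer.
    verdicts = {claim: checker(claim, evidence) for claim in claims}
    return answer, verdicts, all(verdicts.values())


if __name__ == "__main__":
    evidence = "The Eiffel Tower is in Paris. It was completed in 1889."
    answer, verdicts, supported = march_pipeline(
        "Where is the Eiffel Tower?", evidence
    )
    print(verdicts, supported)
```

In the paper, all three roles are LLM agents trained jointly with multi-agent reinforcement learning so that the Proposer learns decompositions the Checker can verify reliably; the stubs above only fix the data flow between the roles.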