ESG-Bench: Benchmarking Long-Context ESG Reports for Hallucination Mitigation

2026-03-13
Computation and Language · Artificial Intelligence
AI summary

The authors created ESG-Bench, a special dataset to help computers better understand complex reports on environmental, social, and governance (ESG) issues. These reports are hard to analyze automatically because they're long and complicated. ESG-Bench includes questions and answers tied to real ESG reports, with clear labels showing if the computer's answers are true or made up (hallucinated). The authors tested new methods that help language models think step-by-step to reduce mistakes and found these methods worked better than usual approaches. Their improvements also helped models do better on other question-answering tasks outside ESG.

Keywords
ESG reporting, large language models, hallucination mitigation, question-answering, Chain-of-Thought prompting, fine-tuning, benchmark dataset, factuality, sustainability assessment, compliance
Authors
Siqi Sun, Ben Peng Wu, Mali Jin, Peizhen Bai, Hanpei Zhang, Xingyi Song
Abstract
As corporate responsibility increasingly incorporates environmental, social, and governance (ESG) criteria, ESG reporting is becoming a legal requirement in many regions and a key channel for documenting sustainability practices and assessing firms' long-term and ethical performance. However, the length and complexity of ESG disclosures make them difficult to interpret and hinder reliable automated analysis. To support scalable and trustworthy analysis, this paper introduces ESG-Bench, a benchmark dataset for ESG report understanding and hallucination mitigation in large language models (LLMs). ESG-Bench contains human-annotated question-answer (QA) pairs grounded in real-world ESG report contexts, with fine-grained labels indicating whether model outputs are factually supported or hallucinated. Framing ESG report analysis as a QA task with verifiability constraints enables systematic evaluation of LLMs' ability to extract and reason over ESG content, and provides a new use case: mitigating hallucinations in socially sensitive, compliance-critical settings. We design task-specific Chain-of-Thought (CoT) prompting strategies and fine-tune multiple state-of-the-art LLMs on ESG-Bench using CoT-annotated rationales. Our experiments show that these CoT-based methods substantially outperform standard prompting and direct fine-tuning in reducing hallucinations, and that the gains transfer to existing QA benchmarks beyond the ESG domain.
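To make the QA-with-verifiability framing concrete, the sketch below shows what a single benchmark instance might look like. The field names (`report_context`, `label`, etc.) and the binary label set are illustrative assumptions; the abstract does not specify the released schema.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical sketch of one ESG-Bench-style instance; the actual field
# names and label taxonomy are assumptions, not the released schema.
@dataclass
class ESGQAInstance:
    report_context: str    # passage excerpted from a real ESG report
    question: str          # human-annotated question grounded in the context
    reference_answer: str  # answer supported by the context
    model_answer: str      # output produced by the LLM under evaluation
    label: Literal["supported", "hallucinated"]  # factuality judgement

example = ESGQAInstance(
    report_context="In FY2024 the company reduced Scope 1 emissions by 12%...",
    question="By how much did the company reduce Scope 1 emissions in FY2024?",
    reference_answer="12%",
    model_answer="The company reduced Scope 1 emissions by 20%.",
    label="hallucinated",  # the claimed figure is not supported by the context
)
```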
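Likewise, a minimal sketch of the kind of evidence-first prompt that a task-specific CoT strategy with a verifiability constraint suggests is given below. The template wording and placeholder names are assumptions for illustration, not the authors' actual prompts.

```python
# Illustrative Chain-of-Thought template: the model must quote supporting
# evidence from the report context before reasoning and answering, and must
# abstain when the context does not contain the answer.
COT_PROMPT = """You are analysing an ESG report excerpt.

Context:
{context}

Question: {question}

First, quote the sentence(s) from the context that bear on the question.
Then reason step by step from the quoted evidence.
Finally, give the answer. If the context does not contain the answer, reply
"Not supported by the report" instead of guessing.
"""

prompt = COT_PROMPT.format(
    context="In FY2024 the company reduced Scope 1 emissions by 12%...",
    question="By how much did the company reduce Scope 1 emissions in FY2024?",
)
```

The quote-then-reason structure makes each answer checkable against the cited span, which is one plausible way to operationalize the verifiability constraint the abstract describes.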