PolicyLLM: Towards Excellent Comprehension of Public Policy for Large Language Models

2026-04-14

Computation and Language; Computers and Society
AI summary

The authors created PolicyBench, a large benchmark that tests how well language models understand government policies from the US and China. It covers three skills: remembering facts, understanding ideas, and applying knowledge to real problems. They also built a special model, PolicyMoE, with a separate expert for each skill. Their study found that models do better at solving real policy problems than at recalling facts or explaining concepts. The work shows where current language models struggle with policy and how to improve them.

Keywords
Large Language Models · PolicyBench · PolicyMoE · Bloom's Taxonomy · Memorization · Understanding · Application · Mixture-of-Experts · Policy Comprehension · Structured Reasoning
Authors
Han Bao, Penghao Zhang, Yue Huang, Zhengqing Yuan, Yanchi Ru, Rui Su, Yujun Zhou, Xiangqi Wang, Kehan Guo, Nitesh V Chawla, Yanfang Ye, Xiangliang Zhang
Abstract
Large Language Models (LLMs) are increasingly integrated into real-world decision-making, including in the domain of public policy. Yet their ability to comprehend and reason about policy-related content remains underexplored. To fill this gap, we present \textbf{\textit{PolicyBench}}, the first large-scale cross-system benchmark (US-China) for evaluating policy comprehension, comprising 21K cases across a broad spectrum of policy areas and capturing the diversity and complexity of real-world governance. Following Bloom's taxonomy, the benchmark assesses three core capabilities: (1) \textbf{Memorization}: factual recall of policy knowledge, (2) \textbf{Understanding}: conceptual and contextual reasoning, and (3) \textbf{Application}: problem-solving in real-life policy scenarios. Building on this benchmark, we further propose \textbf{\textit{PolicyMoE}}, a domain-specialized Mixture-of-Experts (MoE) model with expert modules aligned to each cognitive level. The proposed model demonstrates stronger performance on application-oriented policy tasks than on memorization or conceptual understanding, and yields the highest accuracy on structured reasoning tasks. Our results reveal key limitations of current LLMs in policy understanding and suggest paths toward more reliable, policy-focused models.
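To make the Mixture-of-Experts idea concrete, here is a minimal sketch of the general MoE pattern the abstract describes: a learned gate routes each input to expert modules, one per Bloom-taxonomy level, and the output is a gate-weighted combination of the expert outputs. This is an illustrative toy, not the authors' PolicyMoE implementation; the gate parameters, expert functions, and feature vector below are all hypothetical.

```python
# Toy mixture-of-experts sketch (NOT the PolicyMoE implementation):
# one hypothetical expert per Bloom level, combined via a softmax gate.
import math

EXPERTS = ["memorization", "understanding", "application"]

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_output(features, gate_weights, expert_fns):
    """Gate-weighted combination of per-level expert outputs.

    features:     input feature vector (list of floats).
    gate_weights: one row of linear-gate parameters per expert (hypothetical).
    expert_fns:   one callable per expert, mapping features -> scalar score.
    Returns (combined score, gate probabilities).
    """
    # Linear gate: one logit per expert, then softmax to routing weights.
    logits = [sum(w * f for w, f in zip(row, features)) for row in gate_weights]
    probs = softmax(logits)
    # Mix the expert outputs according to the routing weights.
    combined = sum(p * fn(features) for p, fn in zip(probs, expert_fns))
    return combined, probs
```

For example, `moe_output([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]], experts)` with three toy expert callables routes most of the weight to the first (memorization) expert, since its gate logit is largest. Real MoE layers follow the same pattern with neural experts and often sparse (top-k) routing.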