AI summary
The authors created De Jure, a fully automated system that turns complicated legal rules from regulatory documents into clear, structured formats without needing human help or special training data. Their method breaks the process into steps like formatting documents, using large language models to identify individual rules, checking the quality of these rules based on detailed criteria, and fixing errors iteratively. They tested De Jure in finance, healthcare, and AI policy areas, finding it improved rule extraction quality reliably and worked well with different AI models. When used for compliance question-answering, the rules extracted by De Jure helped produce better answers than earlier methods, showing that good rule extraction directly improves practical applications.
Keywords
Large Language Models, Regulatory Documents, Rule Extraction, Semantic Decomposition, Automated Evaluation, Iteration, Compliance Question Answering, Regulation Grounding, Domain Agnostic, Retrieval-Augmented Generation
Authors
Keerat Guliani, Deepkamal Gill, David Landsman, Nima Eshraghi, Krishna Kumar, Lovedeep Gondara
Abstract
Regulatory documents encode legally binding obligations that LLM-based systems must respect. Yet converting dense, hierarchically structured legal text into machine-readable rules remains a costly, expert-intensive process. We present De Jure, a fully automated, domain-agnostic pipeline for extracting structured regulatory rules from raw documents, requiring no human annotation, domain-specific prompting, or annotated gold data. De Jure operates through four sequential stages: normalization of source documents into structured Markdown; LLM-driven semantic decomposition into structured rule units; multi-criteria LLM-as-a-judge evaluation across 19 dimensions spanning metadata, definitions, and rule semantics; and iterative repair of low-scoring extractions within a bounded regeneration budget, where upstream components are repaired before rule units are evaluated. We evaluate De Jure with four models on three regulatory corpora spanning finance, healthcare, and AI governance. On the finance domain, De Jure yields consistent, monotonic improvement in extraction quality, reaching peak performance within three judge-guided iterations. De Jure generalizes effectively to healthcare and AI governance, maintaining high performance across both open- and closed-source models. In a downstream compliance question-answering evaluation via RAG, responses grounded in De Jure-extracted rules are preferred over prior work in 73.8% of cases at single-rule retrieval depth, rising to 84.0% under broader retrieval, confirming that extraction fidelity translates directly into downstream utility. These results demonstrate that explicit, interpretable evaluation criteria can substitute for human annotation in complex regulatory domains, offering a scalable and auditable path toward regulation-grounded LLM alignment.
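The four-stage loop described in the abstract can be sketched as a judge-guided repair cycle. The sketch below is purely illustrative: function names, the acceptance threshold, and the stubbed scoring logic are assumptions, not the authors' implementation (the real stages call LLMs for decomposition, judging, and repair).

```python
# Illustrative sketch of De Jure's normalize → decompose → judge → repair loop.
# All stage bodies are stubs standing in for LLM calls; names are hypothetical.

THRESHOLD = 0.8   # assumed per-unit acceptance score (not from the paper)
MAX_ITERS = 3     # bounded regeneration budget; paper reports peaking within 3

def normalize(doc: str) -> str:
    """Stage 1: normalize raw source into structured Markdown (stubbed)."""
    return doc.strip()

def decompose(markdown: str) -> list[dict]:
    """Stage 2: LLM-driven semantic decomposition into rule units (stubbed)."""
    return [{"text": line, "score": 0.5} for line in markdown.splitlines() if line]

def judge(unit: dict) -> float:
    """Stage 3: multi-criteria LLM-as-a-judge evaluation; here a stand-in that
    averages 19 per-criterion scores (all stubbed to the unit's stored score)."""
    return sum([unit["score"]] * 19) / 19

def repair(unit: dict) -> dict:
    """Stage 4: regenerate a low-scoring extraction (stubbed as a score bump)."""
    return {**unit, "score": min(1.0, unit["score"] + 0.25)}

def de_jure(doc: str) -> list[dict]:
    units = decompose(normalize(doc))
    for _ in range(MAX_ITERS):
        low = [i for i, u in enumerate(units) if judge(u) < THRESHOLD]
        if not low:
            break                      # all rule units pass the judge
        for i in low:
            units[i] = repair(units[i])  # repair only the failing units
    return units

rules = de_jure("Obligation A.\nObligation B.\n")
print(all(judge(u) >= THRESHOLD for u in rules))  # True after iterative repair
```

The key design point the sketch captures is that re-evaluation and repair are scoped to low-scoring units under a fixed iteration budget, rather than regenerating the whole extraction each round.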