Evaluating Chunking Strategies For Retrieval-Augmented Generation in Oil and Gas Enterprise Documents

2026-03-25

Information Retrieval, Artificial Intelligence
AI summary

The authors studied different ways to break up documents into pieces to help language models find information better. They tested four chunking methods on oil and gas documents that included text, tables, and diagrams. They found that chunking based on the document's structure worked best and used less computing power. However, all methods struggled with diagrams, showing that text-based approaches have trouble with visual information. The authors suggest future research should use models that can understand both text and images.

Keywords
Retrieval-Augmented Generation (RAG), Large Language Models (LLMs), document chunking, structure-aware chunking, semantic chunking, top-K retrieval, oil and gas documents, piping and instrumentation diagrams (P&IDs), multimodal models, information retrieval
Authors
Samuel Taiwo, Mohd Amaluddin Yusoff
Abstract
Retrieval-Augmented Generation (RAG) has emerged as a framework to address the constraints of Large Language Models (LLMs). Yet its effectiveness fundamentally hinges on document chunking, an often-overlooked determinant of output quality. This paper presents an empirical study quantifying performance differences across four chunking strategies: fixed-size sliding window, recursive, breakpoint-based semantic, and structure-aware. We evaluated these methods on a proprietary corpus of oil and gas enterprise documents, including text-heavy manuals, table-heavy specifications, and piping and instrumentation diagrams (P&IDs). Our findings show that structure-aware chunking yields higher overall retrieval effectiveness, particularly on top-K metrics, and incurs significantly lower computational costs than semantic or baseline strategies. Crucially, all four methods demonstrated limited effectiveness on P&IDs, underscoring a core limitation of purely text-based RAG on visually and spatially encoded documents. We conclude that while explicit structure preservation is essential for specialised domains, future work must integrate multimodal models to overcome current limitations.
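To make the contrast between the baseline and the best-performing strategy concrete, here is a minimal sketch of the two: a fixed-size sliding-window chunker operating on raw characters, and a structure-aware chunker that splits along explicit section boundaries (illustrated here with Markdown-style headings). The chunk sizes, overlap, and heading convention are illustrative assumptions; the paper does not publish its exact parameters or the structure markers in its proprietary corpus.

```python
import re


def sliding_window_chunks(text, chunk_size=200, overlap=50):
    """Baseline: fixed-size sliding window over raw characters.

    Each chunk is chunk_size characters long and overlaps the
    previous one by `overlap` characters, so no boundary context
    is lost entirely -- but section structure is ignored.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks


def structure_aware_chunks(doc):
    """Structure-aware: split along the document's own section
    boundaries (here, lines beginning with Markdown headings),
    keeping each section intact as one retrieval unit.
    """
    # Zero-width lookahead keeps the heading with its section body.
    parts = re.split(r"(?m)^(?=#+ )", doc)
    return [p.strip() for p in parts if p.strip()]
```

The difference matters for retrieval: the sliding window can cut a table or procedure mid-row, while the structure-aware split keeps each section (and its heading, which often carries the query-matching terms) together in a single chunk.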