Training the Knowledge Base through Evidence Distillation and Write-Back Enrichment

2026-03-26
Artificial Intelligence · Computation and Language · Information Retrieval
AI summary

The authors argue that the knowledge base used in retrieval-augmented generation (RAG) systems should be trainable rather than fixed. They introduce WriteBack-RAG, a method that improves the knowledge base by identifying relevant information in retrieved documents, distilling it into compact summaries, and writing those summaries back into the knowledge base. The process runs once as an offline step and works with any RAG system. Experiments show consistent improvements across setups, and the enriched knowledge base also helps RAG methods other than the one that produced it, indicating that the improvement resides in the data itself.

Retrieval-Augmented Generation (RAG) · Knowledge base · Large Language Models (LLMs) · Information retrieval · Document distillation · Knowledge indexing · Offline preprocessing · Transfer learning · Benchmark evaluation
Authors
Yuxing Lu, Xukai Zhao, Wei Wu, Jinzhuo Wang
Abstract
The knowledge base in a retrieval-augmented generation (RAG) system is typically assembled once and never revised, even though the facts a query requires are often fragmented across documents and buried in irrelevant content. We argue that the knowledge base should be treated as a trainable component and propose WriteBack-RAG, a framework that uses labeled examples to identify where retrieval succeeds, isolate the relevant documents, and distill them into compact knowledge units that are indexed alongside the original corpus. Because the method modifies only the corpus, it can be applied once as an offline preprocessing step and combined with any RAG pipeline. Across four RAG methods, six benchmarks, and two LLM backbones, WriteBack-RAG improves every evaluated setting, with gains averaging +2.14%. Cross-method transfer experiments further show that the distilled knowledge benefits RAG pipelines other than the one used to produce it, confirming that the improvement resides in the corpus itself.
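The write-back loop the abstract describes can be sketched in a few lines. The following is a toy illustration only, not the authors' implementation: `retrieve` is plain token overlap where the paper would use a real retriever, `supports_answer` is a simple substring check against the gold label, and `distill` is extractive sentence selection where the paper would use an LLM summarizer. All function names are hypothetical.

```python
def tokens(text):
    """Lowercased bag of words (toy tokenizer)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by token overlap with the query (toy retriever)."""
    scored = sorted(corpus, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)
    return scored[:k]

def supports_answer(doc, answer):
    """Label-driven success check: does the document contain the gold answer?"""
    return answer.lower() in doc.lower()

def distill(docs, query):
    """Toy distillation: keep only sentences sharing a token with the query."""
    keep = []
    for doc in docs:
        for sent in doc.split(". "):
            if tokens(query) & tokens(sent):
                keep.append(sent.strip().rstrip("."))
    return ". ".join(keep) + "."

def write_back(corpus, labeled_examples):
    """One offline pass: find queries where retrieval succeeds, distill the
    supporting documents, and append the knowledge units to the corpus."""
    enriched = list(corpus)
    for query, answer in labeled_examples:
        hits = retrieve(query, corpus)
        evidence = [d for d in hits if supports_answer(d, answer)]
        if evidence:  # only write back where retrieval demonstrably succeeded
            enriched.append(distill(evidence, query))
    return enriched

corpus = [
    "The Eiffel Tower is in Paris. It was finished in 1889. Tickets are sold daily.",
    "Mount Fuji is the tallest peak in Japan. It last erupted in 1707.",
]
examples = [("When was the Eiffel Tower finished", "1889")]
enriched = write_back(corpus, examples)
# enriched now holds the two original documents plus one compact knowledge
# unit; any downstream RAG pipeline retrieves over the enriched corpus.
```

Because the enrichment touches only the corpus, the distilled units are indexed once and then served to whatever RAG pipeline runs on top, which is why the gains transfer across methods.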