Scaling Retrieval Augmented Generation with RAG Fusion: Lessons from an Industry Deployment

2026-03-02

Information Retrieval · Artificial Intelligence · Computation and Language
AI summary

The authors studied how combining multiple searches (retrieval fusion) affects systems that generate answers using retrieved documents, especially in real-world conditions with time and resource limits. They found that while fusion techniques do increase how many relevant documents are found at first, these improvements disappear after the system narrows down results for final answers. Surprisingly, fusing searches didn't outperform simpler single-search methods and even made the system slower. The authors suggest that focusing only on finding more documents isn't enough; evaluation should also consider speed and overall answer quality in realistic setups.

Retrieval-Augmented Generation, retrieval fusion, multi-query retrieval, reciprocal rank fusion, document recall, re-ranking, latency constraints, knowledge base, Top-k accuracy, query rewriting
Authors
Luigi Medrano, Arush Verma, Mukul Chhabra
Abstract
Retrieval-Augmented Generation (RAG) systems commonly adopt retrieval fusion techniques such as multi-query retrieval and reciprocal rank fusion (RRF) to increase document recall, under the assumption that higher recall leads to better answer quality. While these methods show consistent gains in isolated retrieval benchmarks, their effectiveness under realistic production constraints remains underexplored. In this work, we evaluate retrieval fusion in a production-style RAG pipeline operating over an enterprise knowledge base, with fixed retrieval depth, re-ranking budgets, and latency constraints. Across multiple fusion configurations, we find that retrieval fusion does increase raw recall, but these gains are largely neutralized after re-ranking and truncation. In our setting, fusion variants fail to outperform single-query baselines on KB-level Top-$k$ accuracy, with Hit@10 decreasing from $0.51$ to $0.48$ in several configurations. Moreover, fusion introduces additional latency overhead due to query rewriting and larger candidate sets, without corresponding improvements in downstream effectiveness. Our analysis suggests that recall-oriented fusion techniques exhibit diminishing returns once realistic re-ranking limits and context budgets are applied. We conclude that retrieval-level improvements do not reliably translate into end-to-end gains in production RAG systems, and argue for evaluation frameworks that jointly consider retrieval quality, system efficiency, and downstream impact.
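The reciprocal rank fusion step evaluated in the abstract follows a standard formulation: each document's fused score is the sum over query variants of $1/(k + \mathrm{rank})$, with $k = 60$ as the conventional smoothing constant. The sketch below illustrates that formulation only; it is not the authors' pipeline, and the function name and parameters are illustrative.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60, top_n=10):
    """Fuse ranked lists of document ids via standard RRF.

    rankings: list of ranked lists (best document first).
    k: smoothing constant; 60 is the commonly used default.
    top_n: truncation depth, mirroring a fixed retrieval budget.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        # A document ranked highly by any single query variant
        # contributes a large 1/(k + rank) term to its total.
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Sort by fused score, highest first, then truncate.
    fused = sorted(scores, key=scores.get, reverse=True)
    return fused[:top_n]

# Example: two query variants with overlapping candidates.
# "b" and "c" appear in both lists, so they outrank "a" and "d".
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "d"]], top_n=3)
# → ["b", "c", "a"]
```

The final truncation (`top_n`) is the point the paper highlights: even if fusion surfaces more relevant documents in the raw candidate pool, a fixed re-ranking and context budget can discard those gains before generation.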