Rethinking Reasoning-Intensive Retrieval: Evaluating and Advancing Retrievers in Agentic Search Systems

2026-05-05
Computation and Language; Information Retrieval
AI summary

The authors focus on improving how search systems find evidence that helps with complex reasoning, rather than just matching similar topics. They created BRIGHT-Pro, a new test that checks whether retrievers can find diverse and complementary pieces of evidence across different search steps. They also built a new training dataset called RTriever-Synth that teaches retrievers to pick better combinations of evidence. Their experiments show that their improved retriever, RTriever-4B, performs better than its base model, especially when evaluated in more realistic, multi-step search scenarios.
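To make the "diverse and complementary evidence" idea concrete, here is a hypothetical sketch of an aspect-coverage metric in the spirit of BRIGHT-Pro's multi-aspect evaluation: each query's gold evidence is grouped into aspects, and a retrieved ranking is scored by the fraction of aspects covered in the top-k results. The function name, data layout, and aspect labels are illustrative assumptions, not the paper's actual implementation.

```python
def aspect_coverage_at_k(ranked_doc_ids, gold_aspects, k=10):
    """Fraction of gold aspects with at least one relevant doc in the top k.

    ranked_doc_ids: list of doc ids in retrieval order.
    gold_aspects: dict mapping aspect name -> set of relevant doc ids.
    """
    top_k = set(ranked_doc_ids[:k])
    covered = sum(1 for docs in gold_aspects.values() if docs & top_k)
    return covered / len(gold_aspects) if gold_aspects else 0.0


# Example: the top-3 ranking covers two of three gold aspects,
# so coverage is 2/3 even though every retrieved doc is "relevant"
# to some aspect under a flat single-list gold set.
gold = {
    "definition": {"d1", "d7"},
    "counterexample": {"d4"},
    "proof_technique": {"d9"},
}
print(aspect_coverage_at_k(["d1", "d4", "d2"], gold, k=3))  # → 0.666...
```

A flat recall@k over the union of all gold docs would score this ranking highly, which is exactly the kind of behavior an aspect-aware metric is meant to expose.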

Keywords: reasoning-intensive retrieval, agentic search, evidence portfolio, benchmark, synthetic corpus, fine-tuning, LoRA, retriever, evaluation metrics, hard negatives
Authors
Yilun Zhao, Jinbiao Wei, Tingyu Song, Siyue Zhang, Chen Zhao, Arman Cohan
Abstract
Reasoning-intensive retrieval aims to surface evidence that supports downstream reasoning rather than merely matching topical similarity. This capability is increasingly important for agentic search systems, where retrievers must provide complementary evidence across iterative search and synthesis. However, existing work remains limited on both evaluation and training: benchmarks such as BRIGHT provide narrow gold sets and evaluate retrievers in isolation, while synthetic training corpora often optimize single-passage relevance rather than evidence portfolio construction. We introduce BRIGHT-Pro, an expert-annotated benchmark that expands each query with multi-aspect gold evidence and evaluates retrievers under both static and agentic search protocols. We further construct RTriever-Synth, an aspect-decomposed synthetic corpus that generates complementary positives and positive-conditioned hard negatives, and use it to fine-tune RTriever-4B from Qwen3-Embedding-4B via LoRA. Experiments across lexical, general-purpose, and reasoning-intensive retrievers show that aspect-aware and agentic evaluation expose behaviors hidden by standard metrics, while RTriever-4B substantially improves over its base model.
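The training recipe described above (positives plus hard negatives) is typically optimized with an InfoNCE-style contrastive loss. The sketch below shows that objective in plain Python, with dot products standing in for a real embedding model; it is a minimal illustration of the standard technique, not the paper's training code, and all names and vectors are made up for the example.

```python
import math


def infonce_loss(query_vec, positive_vec, negative_vecs, temperature=0.05):
    """Cross-entropy over query-document similarities.

    The positive sits at index 0 of the logits; hard negatives follow.
    Minimizing this pulls the query toward its positive and pushes it
    away from the negatives.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    logits = [dot(query_vec, positive_vec) / temperature]
    logits += [dot(query_vec, n) / temperature for n in negative_vecs]
    # Numerically stable log-sum-exp for the softmax normalizer.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]


# A query embedding close to its positive and far from a hard negative
# yields a near-zero loss; swapping positive and negative makes it large.
q, pos, neg = [1.0, 0.0], [0.9, 0.1], [0.1, 0.9]
print(infonce_loss(q, pos, [neg]))
print(infonce_loss(q, neg, [pos]))
```

Positive-conditioned hard negatives, as the abstract describes them, would be mined to look similar to the positive while failing to support the query's reasoning, which is what makes them informative under this loss.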