Aligning Dense Retrievers with LLM Utility via Distillation
2026-04-24 • Information Retrieval
Information Retrieval • Artificial Intelligence • Machine Learning
AI summary
The authors created a method called Utility-Aligned Embeddings (UAE) to improve how computers find useful information quickly. Instead of just looking for similar words, their method makes the computer also consider how helpful a piece of information is, based on how much it reduces confusion in language models. They trained their system to match this usefulness directly in the way information is represented, without needing slow extra computations each time. This new approach made the system better and much faster at finding the best answers in a question-answering test compared to previous methods.
Dense vector retrieval • Retrieval-Augmented Generation (RAG) • Large Language Models (LLMs) • Re-ranking • Perplexity • Bi-encoder • InfoNCE loss • Distribution matching • Recall@1 • QASPER benchmark
Authors
Rajinder Sandhu, Di Mu, Cheng Chang, Md Shahriar Tasjid, Himanshu Rai, Maksims Volkovs, Ga Wu
Abstract
Dense vector retrieval is the practical backbone of Retrieval-Augmented Generation (RAG), but similarity search can suffer from precision limitations. Conversely, utility-based approaches leveraging LLM re-ranking often achieve superior performance but are computationally prohibitive and prone to the noise inherent in perplexity estimation. We propose Utility-Aligned Embeddings (UAE), a framework designed to merge these advantages into a practical, high-performance retrieval method. We formulate retrieval as a distribution matching problem, training a bi-encoder to imitate a utility distribution derived from perplexity reduction using a Utility-Modulated InfoNCE objective. This approach injects graded utility signals directly into the embedding space without requiring test-time LLM inference. On the QASPER benchmark, UAE improves retrieval Recall@1 by 30.59%, MAP by 30.16%, and Token F1 by 17.3% over the strong semantic baseline BGE-Base. Crucially, UAE is over 180x faster than efficient LLM re-ranking methods while preserving competitive performance, demonstrating that aligning retrieval with generative utility yields reliable contexts at scale.
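To make the distribution-matching idea in the abstract concrete, the sketch below shows one plausible way to train a bi-encoder's similarity distribution toward a target distribution built from per-passage perplexity reduction. The function name, temperatures, tensor shapes, and the soft cross-entropy form are assumptions for illustration; the paper's exact Utility-Modulated InfoNCE objective is not specified in this excerpt.

```python
# A minimal sketch (assumed, not the authors' exact objective): soften LLM
# perplexity-reduction utilities into a target distribution and pull the
# retriever's similarity distribution toward it with a soft cross-entropy loss.
import torch
import torch.nn.functional as F


def utility_modulated_infonce(
    query_emb: torch.Tensor,    # (B, D) query embeddings from the bi-encoder
    passage_emb: torch.Tensor,  # (B, K, D) candidate passage embeddings per query
    utility: torch.Tensor,      # (B, K) perplexity reduction per candidate passage
    sim_temp: float = 0.05,     # temperature on retrieval similarities (assumed)
    util_temp: float = 1.0,     # temperature on the utility target (assumed)
) -> torch.Tensor:
    """Soft cross-entropy between the retriever's similarity distribution and a
    graded utility distribution derived from perplexity reduction."""
    # Cosine similarities between each query and its K candidate passages.
    q = F.normalize(query_emb, dim=-1).unsqueeze(1)   # (B, 1, D)
    p = F.normalize(passage_emb, dim=-1)              # (B, K, D)
    sims = (q * p).sum(dim=-1) / sim_temp             # (B, K)

    # Graded target: larger perplexity reduction gets more probability mass.
    target = F.softmax(utility / util_temp, dim=-1)   # (B, K)

    # Soft cross-entropy (equals KL divergence up to a target-only constant).
    log_probs = F.log_softmax(sims, dim=-1)
    return -(target * log_probs).sum(dim=-1).mean()


if __name__ == "__main__":
    # Usage example with random tensors standing in for encoder outputs.
    B, K, D = 4, 8, 768
    loss = utility_modulated_infonce(
        torch.randn(B, D), torch.randn(B, K, D), torch.randn(B, K)
    )
    print(float(loss))
```

Because the target distribution is graded rather than one-hot, this kind of objective reduces to standard InfoNCE when the utility mass collapses onto a single positive passage; the utility signal is distilled into the embeddings at training time, so no LLM call is needed when serving queries.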