Asynchronous Verified Semantic Caching for Tiered LLM Architectures

2026-02-13

Information Retrieval, Artificial Intelligence
AI summary

The authors describe a method called Krites that improves how large language models reuse past answers to save time and computing power. Instead of relying on a simple similarity score to decide whether an old answer can be reused for a new question, Krites uses the language model itself to double-check uncertain cases asynchronously. This lets the system safely reuse more past answers without adding latency to the immediate response. Their tests show that Krites serves many more requests with trusted, pre-approved answers than traditional methods do, without making responses slower.

Keywords

large language models, semantic caching, embedding similarity, static cache, dynamic cache, asynchronous processing, inference latency, LLM judgments, trace-driven simulation, conversational AI
Authors
Asmit Kumar Singh, Haozhe Wang, Laxmi Naga Santosh Attaluri, Tak Chiam, Weihua Zhu
Abstract
Large language models (LLMs) now sit in the critical path of search, assistance, and agentic workflows, making semantic caching essential for reducing inference cost and latency. Production deployments typically use a tiered static-dynamic design: a static cache of curated, offline-vetted responses mined from logs, backed by a dynamic cache populated online. In practice, both tiers are commonly governed by a single embedding-similarity threshold, which induces a hard tradeoff: conservative thresholds miss safe reuse opportunities, while aggressive thresholds risk serving semantically incorrect responses. We introduce Krites, an asynchronous, LLM-judged caching policy that expands static coverage without changing serving decisions. On the critical path, Krites behaves exactly like a standard static-threshold policy. When the nearest static neighbor of the prompt falls just below the static threshold, Krites asynchronously invokes an LLM judge to verify whether the static response is acceptable for the new prompt. Approved matches are promoted into the dynamic cache, allowing future repeats and paraphrases to reuse curated static answers and expanding static reach over time. In trace-driven simulations on conversational and search workloads, Krites increases the fraction of requests served with curated static answers (direct static hits plus verified promotions) by up to 3.9 times relative to tuned baselines, on both conversational traffic and search-style queries, with unchanged critical-path latency.
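
To make the serving-path behavior concrete, below is a minimal sketch of a Krites-style policy as described in the abstract. Every name here (KritesCache, tau_static, judge_band, llm_judge, the brute-force cosine lookup) is an illustrative assumption rather than the authors' implementation; the point is only that the judge runs off the critical path on near-threshold static matches, and that approved matches are promoted into the dynamic tier.

# Minimal sketch of a Krites-style tiered cache, assuming names and data
# structures not given in the abstract (tau_static, judge_band, llm_judge, ...).

import asyncio
from dataclasses import dataclass, field


@dataclass
class CacheEntry:
    prompt: str
    embedding: list[float]
    response: str


def cosine(a: list[float], b: list[float]) -> float:
    # Plain cosine similarity; a production system would use an ANN index.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


async def llm_judge(prompt: str, candidate: str) -> bool:
    # Placeholder: ask an LLM whether `candidate` is an acceptable response
    # for `prompt`. The judge prompt/model is not specified by the abstract.
    raise NotImplementedError


@dataclass
class KritesCache:
    static_entries: list[CacheEntry]                     # curated, offline-vetted tier
    dynamic_entries: list[CacheEntry] = field(default_factory=list)
    tau_static: float = 0.90                             # assumed similarity threshold
    judge_band: float = 0.05                             # assumed "just below" margin

    def _nearest(self, emb, entries):
        if not entries:
            return None, 0.0
        best = max(entries, key=lambda e: cosine(emb, e.embedding))
        return best, cosine(emb, best.embedding)

    async def lookup(self, prompt: str, emb: list[float]) -> str | None:
        """Critical path: same decisions as a standard static-threshold policy."""
        static_best, static_sim = self._nearest(emb, self.static_entries)
        if static_best and static_sim >= self.tau_static:
            return static_best.response                  # direct static hit
        # Near miss: verify asynchronously, without blocking this request.
        if static_best and static_sim >= self.tau_static - self.judge_band:
            asyncio.create_task(self._verify_and_promote(prompt, emb, static_best))
        dyn_best, dyn_sim = self._nearest(emb, self.dynamic_entries)
        if dyn_best and dyn_sim >= self.tau_static:
            return dyn_best.response                     # dynamic-tier hit
        return None                                      # miss -> run full inference

    async def _verify_and_promote(self, prompt, emb, static_entry):
        # Off the critical path: if the judge approves, promote the curated
        # static answer into the dynamic cache keyed by the new prompt, so
        # future repeats and paraphrases can reuse it directly.
        if await llm_judge(prompt, static_entry.response):
            self.dynamic_entries.append(CacheEntry(prompt, emb, static_entry.response))

In this reading, the only thing added to a standard tiered lookup is the fire-and-forget _verify_and_promote task; the decision made for the current request is untouched, which is consistent with the paper's claim of unchanged critical-path latency while static coverage grows over time.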