Mine and Refine: Optimizing Graded Relevance in E-commerce Search Retrieval

2026-02-19

Information Retrieval, Machine Learning
AI summary

The authors created a two-stage training method called "Mine and Refine" to improve how e-commerce search understands product queries. First, they teach a language model using human-graded examples to capture general meanings across different languages. Then, they mine tricky examples, re-label them with the model, and fine-tune the system to better separate relevance levels such as exact matches or substitutes. The approach also includes ways to handle spelling mistakes and to generate synthetic sample queries. Tests showed this method makes search results more relevant and improves user engagement.

semantic text embeddings, contrastive training, Siamese network, two-tower retriever, large language model (LLM), supervised contrastive objective, multi-class circle loss, approximate nearest neighbor (ANN), spelling augmentation, synthetic query generation
Authors
Jiaqi Xi, Raghav Saboo, Luming Chen, Martin Wang, Sudeep Das
Abstract
We propose a two-stage "Mine and Refine" contrastive training framework for semantic text embeddings to enhance multi-category e-commerce search retrieval. Large-scale e-commerce search demands embeddings that generalize to long-tail, noisy queries while relying on scalable supervision compatible with product and policy constraints. A practical challenge is that relevance is often graded: users accept substitutes or complements beyond exact matches, and production systems benefit from clear separation of similarity scores across these relevance strata for stable hybrid blending and thresholding. To obtain scalable, policy-consistent supervision, we fine-tune a lightweight LLM on human annotations under a three-level relevance guideline and further reduce residual noise via engagement-driven auditing. In Stage 1, we train a multilingual Siamese two-tower retriever with a label-aware supervised contrastive objective that shapes a robust global semantic space. In Stage 2, we mine hard samples via ANN, re-annotate them with the policy-aligned LLM, and introduce a multi-class extension of circle loss that explicitly sharpens similarity boundaries between relevance levels to further refine and enrich the embedding space. Robustness is additionally improved through additive spelling augmentation and synthetic query generation. Extensive offline evaluations and production A/B tests show that our framework improves retrieval relevance and delivers statistically significant gains in engagement and business impact.
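The Stage 1 objective is described as a label-aware supervised contrastive loss over the two-tower embeddings. As a rough illustration only (not the authors' implementation), a SupCon-style loss in which batch rows sharing a relevance group act as positives could look like the following; the function name `supcon_loss`, the temperature `tau`, and the grouping scheme are illustrative assumptions:

```python
import numpy as np

def _logsumexp(x, axis=1):
    # Numerically stable log-sum-exp along an axis.
    m = np.max(x, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))

def supcon_loss(embeddings, labels, tau=0.1):
    """SupCon-style label-aware contrastive loss on a batch.

    embeddings: (N, d) query/product vectors from the two towers.
    labels: (N,) integer group ids; rows with the same id are positives.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = z @ z.T / tau
    eye = np.eye(len(labels), dtype=bool)
    logits = np.where(eye, -np.inf, logits)        # exclude self-pairs
    log_prob = logits - _logsumexp(logits)         # row-wise log-softmax
    pos = (labels[:, None] == labels[None, :]) & ~eye
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0                              # anchors with >=1 positive
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1)[valid] / n_pos[valid]
    return float(-per_anchor.mean())
```

The loss decreases as same-label pairs pull together and cross-label pairs push apart, which is the "robust global semantic space" shaping the abstract attributes to Stage 1.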
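The Stage 2 multi-class circle loss is only named, not specified. One plausible reading is a one-vs-rest decomposition of the standard circle loss (Sun et al., 2020) over relevance grades, where each grade boundary separates higher-graded pairs from lower-graded ones; the decomposition and the hyperparameters `m` and `gamma` below are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def _lse(x):
    # Stable log-sum-exp over a 1-D array.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def circle_loss(s_pos, s_neg, m=0.25, gamma=32.0):
    """Standard circle loss (Sun et al., 2020) on cosine similarities."""
    ap = np.maximum(1 + m - s_pos, 0.0)      # self-paced positive weights
    an = np.maximum(s_neg + m, 0.0)          # self-paced negative weights
    lp = -gamma * ap * (s_pos - (1 - m))     # push positives above 1 - m
    ln = gamma * an * (s_neg - m)            # push negatives below m
    # log(1 + sum_n exp(ln) * sum_p exp(lp)), computed stably
    return float(np.logaddexp(0.0, _lse(ln) + _lse(lp)))

def multiclass_circle_loss(sims, grades, m=0.25, gamma=32.0):
    """Hypothetical multi-class extension: for each grade boundary l,
    pairs graded >= l act as positives and the rest as negatives,
    encouraging ordered similarity bands across relevance levels."""
    total = 0.0
    for l in range(1, int(grades.max()) + 1):
        sp, sn = sims[grades >= l], sims[grades < l]
        if sp.size and sn.size:
            total += circle_loss(sp, sn, m, gamma)
    return total
```

Under this reading, a three-level guideline (irrelevant / substitute / exact) yields two boundaries, and minimizing the summed loss pushes the similarity distributions of the three strata apart, matching the abstract's goal of clear score separation for blending and thresholding.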
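The additive spelling augmentation is likewise not detailed; a common recipe is to add typo-perturbed copies of training queries alongside the originals so the retriever stays robust to misspellings. A minimal sketch, where the edit operations, `misspell`, and `n_edits` are illustrative assumptions:

```python
import random

def misspell(query, n_edits=1, rng=None):
    """Apply random character-level edits (delete, swap, duplicate,
    replace) to a query to simulate user typos."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    chars = list(query)
    for _ in range(n_edits):
        if len(chars) < 2:
            break
        i = rng.randrange(len(chars) - 1)
        op = rng.choice(["delete", "swap", "dup", "replace"])
        if op == "delete":
            del chars[i]
        elif op == "swap":
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        elif op == "dup":
            chars.insert(i, chars[i])
        else:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)
```

During training, each perturbed query would be paired with the same products as its clean original, teaching the query tower to map typos near their intended spelling.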