Back to Basics: Revisiting ASR in the Age of Voice Agents
2026-03-26 • Artificial Intelligence • Multimedia
AI summary
The authors created WildASR, a test that checks how well speech recognition systems work using real human voices in four languages, focusing on three challenges: noisy environments, different speaker backgrounds, and language variety. They tested seven popular systems and found that performance drops a lot in tricky situations, and good results in one language or condition don’t guarantee the same elsewhere. They also discovered that some systems make up words when they can’t hear well, which can cause problems for voice assistants. The authors suggest using targeted tests like WildASR to better understand and improve these systems in real life.
Automatic Speech Recognition • Multilingual Benchmark • Environmental Degradation • Demographic Shift • Linguistic Diversity • Model Robustness • Speech Hallucination • Diagnostic Evaluation • Voice Agents
Authors
Geeyang Tay, Wentao Ma, Jaewon Lee, Yuzhi Tang, Daniel Lee, Weisu Yin, Dongming Shen, Silin Meng, Yi Zhu, Mu Li, Alex Smola
Abstract
Automatic speech recognition (ASR) systems have achieved near-human accuracy on curated benchmarks, yet still fail in real-world voice agents under conditions that current evaluations do not systematically cover. Without diagnostic tools that isolate specific failure factors, practitioners cannot anticipate which conditions, in which languages, will cause what degree of degradation. We introduce WildASR, a multilingual (four-language) diagnostic benchmark sourced entirely from real human speech that factorizes ASR robustness along three axes: environmental degradation, demographic shift, and linguistic diversity. Evaluating seven widely used ASR systems, we find severe and uneven performance degradation, and we observe that model robustness does not transfer across languages or conditions. Critically, models often hallucinate plausible but unspoken content under partial or degraded inputs, creating concrete safety risks for downstream agent behavior. Our results demonstrate that targeted, factor-isolated evaluation is essential for understanding and improving ASR reliability in production systems. Beyond the benchmark itself, we present three analytical tools that practitioners can use to guide deployment decisions.
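The abstract's notion of factor-isolated evaluation can be made concrete with a short sketch. The code below groups utterances into (language, condition) cells, computes word error rate per cell, and tracks insertion rate as a crude proxy for the hallucination behavior the authors describe. Everything here is illustrative: the `Utterance` fields, the condition labels, and the insertion-rate proxy are assumptions for the sake of the example, not WildASR's actual data format or metrics.

```python
"""Sketch of factor-isolated ASR evaluation in the spirit of WildASR.

All names (record fields, condition labels, the insertion-rate
hallucination proxy) are illustrative assumptions, not the paper's
actual data format or metrics.
"""
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Utterance:
    language: str    # e.g. "en", "es"  (hypothetical labels)
    condition: str   # one factor level, e.g. "clean", "street_noise"
    reference: str   # ground-truth transcript
    hypothesis: str  # ASR system output


def align_counts(ref: list[str], hyp: list[str]) -> tuple[int, int, int]:
    """Levenshtein word alignment; returns (substitutions, deletions, insertions)."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match / substitute
                           dp[i - 1][j] + 1,         # delete a reference word
                           dp[i][j - 1] + 1)         # insert a hypothesis word
    # Backtrace to attribute each edit to an operation type.
    subs = dels = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            subs += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return subs, dels, ins


def factor_isolated_report(utts: list[Utterance]) -> None:
    """Aggregate WER and insertion rate per (language, condition) cell.

    A high insertion rate on degraded audio serves here as a rough proxy
    for hallucinated (unspoken) content -- our assumption, not the paper's metric.
    """
    cells = defaultdict(lambda: {"edits": 0, "ins": 0, "ref_words": 0})
    for u in utts:
        ref, hyp = u.reference.lower().split(), u.hypothesis.lower().split()
        s, d, i = align_counts(ref, hyp)
        cell = cells[(u.language, u.condition)]
        cell["edits"] += s + d + i
        cell["ins"] += i
        cell["ref_words"] += len(ref)
    for key, c in sorted(cells.items()):
        wer = c["edits"] / max(c["ref_words"], 1)
        ins_rate = c["ins"] / max(c["ref_words"], 1)
        print(f"{key}: WER={wer:.2%}, insertion rate={ins_rate:.2%}")


if __name__ == "__main__":
    demo = [
        Utterance("en", "clean", "turn on the lights", "turn on the lights"),
        # Degraded input where the system appends plausible but unspoken words:
        Utterance("en", "street_noise", "turn on the lights",
                  "turn on the lights in the kitchen"),
    ]
    factor_isolated_report(demo)
```

Separating insertions from substitutions and deletions is the point of the backtrace: hallucinated content shows up specifically as hypothesis words with no counterpart in the reference, so a per-condition insertion rate can surface the failure mode the abstract warns about even when aggregate WER looks tolerable.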