SpeechParaling-Bench: A Comprehensive Benchmark for Paralinguistic-Aware Speech Generation

2026-04-22

Computation and Language · Artificial Intelligence · Sound
AI summary

The authors created SpeechParaling-Bench, a new test set for evaluating how well large speech models understand and generate subtle voice cues like tone and emotion. The benchmark covers more than 100 fine-grained voice features and includes over 1,000 paired English and Chinese speech queries. The authors also developed an evaluation method that compares model outputs by judging which of two responses is better, rather than assigning absolute scores, which reduces bias and the need for human reviewers. Experiments showed that current models still struggle to control and adapt these voice features, causing many errors in situational dialogue. The work highlights that speech models must get better at capturing these subtle signals to enable more natural interactions.

Paralinguistic cues · Speech generation · Large Audio-Language Models · Benchmarking · Voice modulation · Evaluation methods · Context-aware adaptation · Pairwise comparison · Human-computer interaction · Situational dialogue
Authors
Ruohan Liu, Shukang Yin, Tao Wang, Dong Zhang, Weiji Zhuang, Shuhuai Ren, Ran He, Caifeng Shan, Chaoyou Fu
Abstract
Paralinguistic cues are essential for natural human-computer interaction, yet their evaluation in Large Audio-Language Models (LALMs) remains limited by coarse feature coverage and the inherent subjectivity of assessment. To address these challenges, we introduce SpeechParaling-Bench, a comprehensive benchmark for paralinguistic-aware speech generation. It expands existing coverage from fewer than 50 to over 100 fine-grained features, supported by more than 1,000 English-Chinese parallel speech queries, and is organized into three progressively challenging tasks: fine-grained control, intra-utterance variation, and context-aware adaptation. To enable reliable evaluation, we further develop a pairwise comparison pipeline, in which candidate responses are evaluated against a fixed baseline by an LALM-based judge. By framing evaluation as relative preference rather than absolute scoring, this approach mitigates subjectivity and yields more stable and scalable assessments without costly human annotation. Extensive experiments reveal substantial limitations in current LALMs. Even leading proprietary models struggle with comprehensive static control and dynamic modulation of paralinguistic features, while failure to correctly interpret paralinguistic cues accounts for 43.3% of errors in situational dialogue. These findings underscore the need for more robust paralinguistic modeling toward human-aligned voice assistants.
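To make the pairwise evaluation protocol concrete, the sketch below shows one way such a pipeline could be wired up. This is a minimal illustration, not the authors' implementation: the names (SpeechResponse, pairwise_win_rate) and the judge interface (a callable that hears the query plus two audio responses and returns which it prefers) are assumptions for the example.

```python
import random
from dataclasses import dataclass
from typing import Callable


@dataclass
class SpeechResponse:
    """A candidate spoken response (hypothetical container for this sketch)."""
    model_name: str
    audio_path: str  # path to the generated speech waveform


def pairwise_win_rate(
    queries: list[str],
    candidates: list[SpeechResponse],
    baselines: list[SpeechResponse],
    judge: Callable[[str, str, str], int],
) -> float:
    """Fraction of queries on which the judge prefers the candidate model.

    `judge(query, audio_a, audio_b)` is an assumed LALM-judge interface:
    it returns 0 if the first audio better realizes the requested
    paralinguistic cues, 1 if the second does. Presentation order is
    randomized per query so position bias in the judge cancels out.
    """
    wins = 0
    for query, cand, base in zip(queries, candidates, baselines):
        # Randomly swap which response is played first.
        flip = random.random() < 0.5
        a, b = (base, cand) if flip else (cand, base)
        choice = judge(query, a.audio_path, b.audio_path)
        # Map the judge's positional verdict back to candidate vs. baseline.
        picked = (a, b)[choice]
        wins += picked is cand
    return wins / len(queries)
```

Reporting a win rate against a fixed baseline, rather than an absolute score, keeps results comparable across candidate models even if the judge's absolute calibration drifts, which is the stabilizing property the abstract attributes to relative-preference framing.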