ORCE: Order-Aware Alignment of Verbalized Confidence in Large Language Models
2026-05-12 • Machine Learning
Machine Learning · Computation and Language
AI summary
The authors study how large language models can state, in natural language, how confident they are in their answers. They point out that previous methods optimized answer generation and confidence statements jointly, which sometimes lowered answer quality. Their approach first generates an answer, then separately estimates confidence conditioned on that fixed answer, improving how well confidence reflects correctness without degrading the answer itself. On difficult reasoning and knowledge tasks, this yields better alignment between confidence and correctness while keeping answer accuracy largely unchanged.
Large Language Models · Confidence Estimation · Verbalized Confidence · Calibration · Reinforcement Learning · Answer Accuracy · Confidence Alignment · Sampling-based Surrogate · Rank-based Optimization
Authors
Chen Li, Xiaoling Hu, Songzhu Zheng, Jiawei Zhou, Chao Chen
Abstract
Large language models (LLMs) often produce answers with high certainty even when they are incorrect, making reliable confidence estimation essential for deployment in real-world scenarios. Verbalized confidence, where models explicitly state their confidence in natural language, provides a flexible and user-facing uncertainty signal that can be applied even when token logits are unavailable. However, existing verbalized-confidence methods often optimize answer generation and confidence generation jointly, which can cause confidence-alignment objectives to interfere with answer accuracy. In this work, we propose a decoupled and order-aware framework for verbalized confidence calibration. Our method first generates an answer and then estimates confidence conditioned on the fixed question–answer pair, allowing confidence optimization without directly perturbing the answer-generation process. To align confidence with correctness likelihood, we construct a sampling-based surrogate from multiple model completions and optimize rank-based reinforcement learning objectives that encourage responses with higher estimated correctness likelihood to receive higher verbalized confidence. Experiments on reasoning and knowledge-intensive benchmarks show that our method improves calibration and failure prediction performance while largely preserving answer accuracy. These results demonstrate that verbalized confidence can be more reliably aligned by decoupling confidence estimation from answer generation and optimizing the relative ordering of confidence across responses.
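The two core ingredients the abstract describes can be sketched in a few lines: a sampling-based correctness surrogate (the fraction of sampled completions judged correct) and a rank-based objective that rewards responses with a higher surrogate for stating higher verbalized confidence. This is a minimal illustrative sketch, not the paper's implementation; the `sample_answers` and `grade` callables, the choice of a hinge-style pairwise loss, and the `margin` parameter are all assumptions for illustration.

```python
def correctness_surrogate(sample_answers, grade, k=8):
    """Sampling-based surrogate (assumed form): fraction of k sampled
    completions that a grader marks correct for a fixed question."""
    answers = [sample_answers() for _ in range(k)]
    return sum(grade(a) for a in answers) / k

def pairwise_rank_loss(pairs, margin=0.0):
    """Hinge-style rank loss over (surrogate, verbalized_confidence) pairs:
    whenever response i has a higher surrogate than response j, its stated
    confidence should exceed j's by at least `margin`. Zero when the
    confidence ordering matches the surrogate ordering."""
    loss, count = 0.0, 0
    for (s_i, c_i) in pairs:
        for (s_j, c_j) in pairs:
            if s_i > s_j:
                loss += max(0.0, margin + c_j - c_i)
                count += 1
    return loss / max(count, 1)

# A correctly ordered batch incurs no loss; a misordered one does.
well_ordered = [(0.9, 0.8), (0.2, 0.3)]   # higher surrogate, higher confidence
misordered = [(0.9, 0.2), (0.2, 0.8)]     # confident when likely wrong
```

In the paper's setting this scalar loss would serve as a reward signal for reinforcement learning over the confidence-generation step only, leaving the already-generated answer untouched.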