ROSE: An Intent-Centered Evaluation Metric for NL2SQL

2026-04-14 · Databases · Artificial Intelligence
AI summary

The authors point out that the usual way to check whether a computer correctly turns questions into SQL queries (called Execution Accuracy) has problems: it can be tricked by small changes in wording or by mistakes in the "correct" answers it compares against. To fix this, they created a new method called ROSE that focuses on whether the SQL query actually answers the question's intent, not just whether it matches the given answer. ROSE uses a two-step process: one part checks whether the SQL answers the question on its own, and the other compares it with the official answer to refine that judgment. The authors found that ROSE agrees with human experts much better than older methods, tested it across many systems, and released tools to help future research.
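The two-step process described above can be sketched as a simple cascade. This is a hypothetical illustration of the control flow only: `prover_judge` and `refuter_judge` stand in for the paper's LLM-based components and are stubbed here with trivial rules so the sketch runs.

```python
# Hypothetical sketch of a two-stage Prover-Refuter cascade.
# The real system uses model-based judges; these stubs only
# illustrate how the two stages interact.

def prover_judge(question, predicted_sql):
    # Stage 1 (stub): judge the predicted SQL against the question alone,
    # without consulting any reference query.
    return "SELECT" in predicted_sql.upper()

def refuter_judge(question, predicted_sql, gold_sql, prover_verdict):
    # Stage 2 (stub): bring in the ground-truth SQL as evidence to
    # challenge the prover's verdict; otherwise keep it.
    if predicted_sql.strip().lower() == gold_sql.strip().lower():
        return True
    return prover_verdict

def evaluate(question, predicted_sql, gold_sql):
    verdict = prover_judge(question, predicted_sql)
    return refuter_judge(question, predicted_sql, gold_sql, verdict)

print(evaluate("How many users signed up?",
               "SELECT COUNT(*) FROM users",
               "SELECT COUNT(*) FROM users"))  # True
```

The key design point is that the first stage never sees the reference SQL, so an erroneous ground truth cannot bias the initial judgment; the reference only enters later as adversarial evidence.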

Natural Language to SQL (NL2SQL), Execution Accuracy, ROSE metric, Semantic correctness, SQL Prover, Adversarial Refuter, Cohen's Kappa, Ground-truth SQL, Intent-centered evaluation, Validation dataset
Authors
Wenqi Pei, Shizheng Hou, Boyan Li, Han Chen, Zhichao Shi, Yuyu Luo
Abstract
Execution Accuracy (EX), the widely used metric for evaluating the effectiveness of Natural Language to SQL (NL2SQL) solutions, is becoming increasingly unreliable. It is sensitive to syntactic variation, ignores that questions may admit multiple interpretations, and is easily misled by erroneous ground-truth SQL. To address this, we introduce ROSE, an intent-centered metric that focuses on whether the predicted SQL answers the question, rather than on consistency with the ground-truth SQL under the reference-dependent paradigm. ROSE employs an adversarial Prover-Refuter cascade: the SQL Prover independently assesses the semantic correctness of a predicted SQL against the user's intent, while the Adversarial Refuter uses the ground-truth SQL as evidence to challenge and refine this judgment. On our expert-aligned validation set ROSE-VEC, ROSE achieves the best agreement with human experts, outperforming the next-best metric by nearly 24% in Cohen's Kappa. We also conduct a large-scale re-evaluation of 19 NL2SQL methods, revealing four valuable insights. We release ROSE and ROSE-VEC to facilitate more reliable NL2SQL research.
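The agreement figure above is measured with Cohen's Kappa, which corrects raw agreement for the agreement expected by chance: \(\kappa = (p_o - p_e)/(1 - p_e)\). A minimal sketch of the computation, with toy metric-vs-human labels (not data from the paper):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: a metric's verdicts vs. human expert labels (1 = correct SQL).
metric = [1, 1, 0, 1, 0, 0, 1, 1]
human  = [1, 1, 0, 0, 0, 1, 1, 1]
print(round(cohens_kappa(metric, human), 3))  # 0.467
```

Here raw agreement is 0.75, but because both raters mark most items correct, chance alone would yield 0.531, leaving a much more modest kappa of 0.467 -- which is why kappa, not raw accuracy, is the right yardstick for comparing evaluation metrics against human judgment.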