DARE-bench: Evaluating Modeling and Instruction Fidelity of LLMs in Data Science

2026-02-27

Artificial Intelligence, Computation and Language
AI summary

The authors created DARE-bench to better test how well large language models can follow instructions in data science and machine learning tasks. Unlike other tests, DARE-bench uses tasks with clear correct answers to make evaluations fair and repeatable. It includes thousands of real tasks from Kaggle and also provides data to help models get better at these tasks. Their experiments show that even advanced models have trouble, but training them on DARE-bench tasks significantly improves their accuracy. This work highlights DARE-bench's usefulness for both testing and training AI models in data science.

Large Language Models, Benchmarking, Instruction following, Process-aware evaluation, Machine learning modeling, Kaggle, Fine-tuning, Supervised learning, Reinforcement learning
Authors
Fan Shu, Yite Wang, Ruofan Wu, Boyi Liu, Zhewei Yao, Yuxiong He, Feng Yan
Abstract
The fast-growing demand for Large Language Models (LLMs) that can tackle complex multi-step data science tasks creates an urgent need for accurate benchmarking. There are two major gaps in existing benchmarks: (i) the lack of standardized, process-aware evaluation that captures instruction adherence and process fidelity, and (ii) the scarcity of accurately labeled training data. To bridge these gaps, we introduce DARE-bench, a benchmark designed for machine learning modeling and data science instruction following. Unlike many existing benchmarks that rely on human- or model-based judges, all tasks in DARE-bench have verifiable ground truth, ensuring objective and reproducible evaluation. To cover a broad range of tasks and support agentic tools, DARE-bench consists of 6,300 Kaggle-derived tasks and provides both large-scale training data and evaluation sets. Extensive evaluations show that even highly capable models such as gpt-o4-mini struggle to achieve good performance, especially on machine learning modeling tasks. Fine-tuning on DARE-bench training tasks can substantially improve model performance: supervised fine-tuning boosts Qwen3-32B's accuracy by 1.83x, and reinforcement learning boosts Qwen3-4B's accuracy by more than 8x. These significant improvements confirm the value of DARE-bench both as an accurate evaluation benchmark and as a source of critical training data.
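To make the "verifiable ground truth" idea concrete, the sketch below contrasts judge-free scoring with the judge-based evaluation the abstract criticizes. This is an illustrative toy, not DARE-bench's actual harness; the task records and the exact-match rule are assumptions for demonstration only.

```python
# Illustrative sketch (NOT DARE-bench's actual evaluation code): scoring
# model outputs against verifiable ground truth. Because the check is a
# deterministic comparison, the same predictions always yield the same
# score, which is what makes the evaluation objective and reproducible,
# unlike a human or LLM judge whose verdicts can vary between runs.

def verifiable_score(prediction: str, ground_truth: str) -> bool:
    """Deterministic exact-match check after light normalization."""
    return prediction.strip().lower() == ground_truth.strip().lower()

# Hypothetical task records with known correct answers.
tasks = [
    {"prediction": "RMSE", "ground_truth": "rmse"},   # correct
    {"prediction": "0.91", "ground_truth": "0.87"},   # incorrect
]

accuracy = sum(
    verifiable_score(t["prediction"], t["ground_truth"]) for t in tasks
) / len(tasks)
print(accuracy)  # 0.5
```

Real benchmark checkers are typically richer (numeric tolerances, structured-output validation, executing submitted code), but the key property is the same: correctness is decided by a fixed program against known answers rather than by a judge.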