Comparing Developer and LLM Biases in Code Evaluation
2026-03-25 • Software Engineering
Software Engineering · Computation and Language
AI summary
The authors created a tool called TRACE to test how well large language models (LLMs) can judge code quality compared to human developers. They found that even the best LLM judges are noticeably worse than humans at matching developer preferences. TRACE also uncovered many specific ways these models and humans differ, such as judges favoring longer explanations while humans like shorter ones. This shows that LLMs struggle to fully align with human opinions on code quality in real-world situations.
Large Language Models · Code Evaluation · Human Preferences · Automatic Rubric Extraction · Interactive Settings · Chat-based Programming · IDE Autocompletion · Code Quality Criteria · Model Alignment · Software Engineering
Authors
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donahue, Ameet Talwalkar, Wayne Chi, Valerie Chen
Abstract
As LLMs are increasingly used as judges in code applications, they should be evaluated in realistic interactive settings that capture partial context and ambiguous intent. We present TRACE (Tool for Rubric Analysis in Code Evaluation), a framework that evaluates LLM judges' ability to predict human preferences and automatically extracts rubric items to reveal systematic biases in how humans and models weigh each item. Across three modalities -- chat-based programming, IDE autocompletion, and instructed code editing -- we use TRACE to measure how well LLM judges align with developer preferences. Among 13 different models, the best judges underperform human annotators by 12-23%. TRACE identifies 35 significant sources of misalignment between humans and judges across interaction modalities, the majority of which correspond to existing software engineering code quality criteria. For example, in chat-based coding, judges are biased towards longer code explanations while humans prefer shorter ones. We find significant misalignment on the majority of existing code quality dimensions, showing alignment gaps between LLM judges and human preferences in realistic coding applications.
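The abstract does not specify how judge-human alignment is scored; a common approach for pairwise preference data is simple agreement over comparisons. The sketch below, with entirely hypothetical labels and a hypothetical `judge_human_agreement` helper, illustrates how an LLM judge's picks could be compared against human annotator picks over the same response pairs.

```python
from typing import List

def judge_human_agreement(judge_picks: List[str], human_picks: List[str]) -> float:
    """Fraction of pairwise comparisons where the LLM judge selects the
    same response ('A' or 'B') as the human annotator."""
    if len(judge_picks) != len(human_picks) or not judge_picks:
        raise ValueError("pick lists must be non-empty and equal length")
    matches = sum(j == h for j, h in zip(judge_picks, human_picks))
    return matches / len(judge_picks)

# Hypothetical labels over five response pairs (not from the paper's data)
judge = ["A", "B", "A", "A", "B"]
human = ["A", "A", "A", "B", "B"]
print(judge_human_agreement(judge, human))  # 0.6
```

A 12-23% gap like the one reported would then correspond to the judge's agreement rate falling that far below the agreement rate among human annotators themselves.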