Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents

2026-04-07

Artificial Intelligence
AI summary

The authors created Claw-Eval, a new testing system for AI agents that perform complex tasks in real software settings. Unlike older tests that only check final results, Claw-Eval tracks every step an agent takes and evaluates safety and reliability more thoroughly. They tested 14 top AI models and found many hidden problems missed by previous methods, especially in safety and consistency. They also found that performance varies sharply across modalities, with most models doing worse on video than on documents or images. Their work helps guide improvements to make AI agents not just smart, but safe and dependable too.

Keywords
large language models, autonomous agents, multi-step workflows, trajectory-aware grading, safety evaluation, robustness, multimodal perception, benchmarking, error injection
Authors
Bowen Ye, Rang Li, Qibin Yang, Yuanxin Liu, Linli Yao, Hanglong Lv, Zhihui Xie, Chenxin An, Lei Li, Lingpeng Kong, Qi Liu, Zhifang Sui, Tong Yang
Abstract
Large language models are increasingly deployed as autonomous agents executing multi-step workflows in real-world software environments. However, existing agent benchmarks suffer from three critical limitations: (1) trajectory-opaque grading that checks only final outputs, (2) underspecified safety and robustness evaluation, and (3) narrow modality coverage and interaction paradigms. We introduce Claw-Eval, an end-to-end evaluation suite addressing all three gaps. It comprises 300 human-verified tasks spanning 9 categories across three groups (general service orchestration, multimodal perception and generation, and multi-turn professional dialogue). Every agent action is recorded through three independent evidence channels (execution traces, audit logs, and environment snapshots), enabling trajectory-aware grading over 2,159 fine-grained rubric items. The scoring protocol evaluates Completion, Safety, and Robustness, reporting Average Score, Pass@k, and Pass^k across three trials to distinguish genuine capability from lucky outcomes. Experiments on 14 frontier models reveal that: (1) trajectory-opaque evaluation is systematically unreliable, missing 44% of safety violations and 13% of robustness failures that our hybrid pipeline catches; (2) controlled error injection primarily degrades consistency rather than peak capability, with Pass^3 dropping by up to 24% while Pass@3 remains stable; (3) multimodal performance varies sharply, with most models performing worse on video than on documents or images, and no single model dominating across all modalities. Beyond benchmarking, Claw-Eval highlights actionable directions for agent development, shedding light on what it takes to build agents that are not only capable but reliably deployable.
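
For readers less familiar with the Pass@k versus Pass^k distinction the abstract relies on, the minimal Python sketch below shows one plausible way to aggregate per-task trial outcomes, assuming the usual conventions: Pass@k counts a task as solved if any of the k trials succeeds, while Pass^k requires all k trials to succeed. The function names, data layout, and the simplified binary Average Score are illustrative assumptions, not the authors' rubric-based implementation; the point is only to show why error injection can lower Pass^3 (consistency) while leaving Pass@3 (peak capability) stable.

```python
from statistics import mean

def pass_at_k(trials: list[bool]) -> bool:
    """Pass@k: the task counts as solved if ANY of the k trials succeeds."""
    return any(trials)

def pass_pow_k(trials: list[bool]) -> bool:
    """Pass^k: the task counts as solved only if ALL k trials succeed,
    rewarding consistency rather than a single lucky run."""
    return all(trials)

def aggregate(results: dict[str, list[bool]]) -> dict[str, float]:
    """Aggregate per-task trial outcomes (here, 3 trials per task) into
    benchmark-level rates. Average Score is simplified to the mean
    per-trial success rate; the paper's rubric scoring is finer-grained."""
    return {
        "average_score": mean(mean(t) for t in results.values()),
        "pass@3": mean(pass_at_k(t) for t in results.values()),
        "pass^3": mean(pass_pow_k(t) for t in results.values()),
    }

# Hypothetical toy data: a flaky agent keeps a high Pass@3 but a lower Pass^3.
results = {
    "task_001": [True, True, True],     # consistently solved
    "task_002": [True, False, True],    # flaky: counts for Pass@3, not Pass^3
    "task_003": [False, False, False],  # never solved
}
print(aggregate(results))
# {'average_score': 0.555..., 'pass@3': 0.666..., 'pass^3': 0.333...}
```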