Claw-Eval-Live: A Live Agent Benchmark for Evolving Real-World Workflows

2026-04-30

Software Engineering · Artificial Intelligence
AI summary

The authors created Claw-Eval-Live, a new benchmark to test how well AI agents can complete real-world workflows across various software and tasks. Unlike older tests that only check final answers, this benchmark tracks the entire workflow and can be updated over time to match changing work demands. Their tests showed that even the best AI agents still struggle to fully automate complex tasks, especially in business and management areas. The authors emphasize the need for evaluation methods that consider both current real-world demand and clear evidence of what the AI actually did.

LLM agents, workflow automation, benchmark, execution traces, business services, workspace repair, deterministic checks, LLM judging, task evaluation, workflow-demand signals
Authors
Chenxin Li, Zhengyang Tang, Huangxin Lin, Yunlong Lin, Shijue Huang, Shengyuan Liu, Bowen Ye, Rang Li, Lei Li, Benyou Wang, Yixuan Yuan
Abstract
LLM agents are expected to complete end-to-end units of work across software tools, business services, and local workspaces. Yet many agent benchmarks freeze a curated task set at release time and grade mainly the final response, making it difficult to evaluate agents against evolving workflow demand or to verify whether a task was actually executed. We introduce Claw-Eval-Live, a live benchmark for workflow agents that separates a refreshable signal layer (updated across releases from public workflow-demand signals) from a reproducible, time-stamped release snapshot. Each release is constructed from these signals, with the ClawHub Top-500 skills used in the current release, and materialized as controlled tasks with fixed fixtures, services, workspaces, and graders. For grading, Claw-Eval-Live records execution traces, audit logs, service state, and post-run workspace artifacts, applying deterministic checks when the evidence is sufficient and structured LLM judging only for semantic dimensions. The current release contains 105 tasks spanning controlled business services and local workspace repair, and evaluates 13 frontier models under a shared public pass rule. Experiments show that reliable workflow automation remains far from solved: the leading model passes only 66.7% of tasks, and no model reaches 70%. Failures are structured by task family and execution surface: HR, management, and multi-system business workflows are persistent bottlenecks, while local workspace repair is comparatively easier but still unsaturated. Leaderboard rank alone is insufficient, because models with similar pass rates can diverge in overall completion, and task-level discrimination concentrates in a middle band of tasks. Claw-Eval-Live suggests that workflow-agent evaluation should be grounded twice: in fresh external demand and in verifiable agent action.
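
The grading scheme described above (deterministic checks over recorded evidence first, structured LLM judging only for semantic dimensions) can be pictured with a minimal sketch. The names, dataclasses, and function signatures below are illustrative assumptions, not the Claw-Eval-Live implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class RunEvidence:
    """Evidence recorded for one agent run (hypothetical shape)."""
    execution_trace: list = field(default_factory=list)   # tool calls observed during the run
    audit_log: list = field(default_factory=list)         # service-side audit events
    service_state: dict = field(default_factory=dict)     # final state of controlled business services
    workspace_artifacts: dict = field(default_factory=dict)  # post-run files from the local workspace

@dataclass
class GradeResult:
    passed: bool
    method: str          # "deterministic" or "llm_judge"
    details: str = ""

def grade_task(
    evidence: RunEvidence,
    deterministic_check: Callable[[RunEvidence], Optional[bool]],
    llm_judge: Callable[[RunEvidence], bool],
) -> GradeResult:
    """Apply a deterministic check when the evidence suffices;
    fall back to a structured LLM judge for semantic dimensions."""
    verdict = deterministic_check(evidence)  # None signals inconclusive evidence
    if verdict is not None:
        return GradeResult(passed=verdict, method="deterministic")
    return GradeResult(passed=llm_judge(evidence), method="llm_judge",
                       details="semantic dimension judged by LLM")
```

A per-task grader would supply its own `deterministic_check` (e.g. asserting on service state or workspace artifacts) and only reach the LLM judge when those assertions cannot decide the outcome; the actual pass rule used in the release is defined by the benchmark, not by this sketch.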