ClawGym: A Scalable Framework for Building Effective Claw Agents

2026-04-29

Computation and Language, Artificial Intelligence, Machine Learning
AI summary

The authors introduce ClawGym, a new system that helps develop and test personal agents that work step-by-step with local files and tools. They created ClawGym-SynData, a large and varied dataset of tasks based on user intents and skills, paired with realistic simulated work environments and ways to check whether tasks are done correctly. They trained several agent models called ClawGym-Agents on this data using supervised learning, and also explored reinforcement learning with a pipeline that runs many task rollouts in parallel. To measure performance, they built ClawGym-Bench, a benchmark of 200 tasks reviewed by both humans and AI. The authors plan to release these tools publicly for other researchers to use.

Claw-style environments, personal agents, supervised fine-tuning, reinforcement learning, benchmark, task synthesis, workspace simulation, agent training, evaluation, black-box rollout
Authors
Fei Bai, Huatong Song, Shuang Sun, Daixuan Cheng, Yike Yang, Chuan Hao, Renyuan Li, Feng Chang, Yuan Wei, Ran Tao, Bryan Dai, Jian Yang, Wayne Xin Zhao
Abstract
Claw-style environments support multi-step workflows over local files, tools, and persistent workspace states. However, scalable development around these environments remains constrained by the absence of a systematic framework, especially one for synthesizing verifiable training data and integrating it with agent training and diagnostic evaluation. To address this challenge, we present ClawGym, a scalable framework that supports the full lifecycle of Claw-style personal agent development. Concretely, we construct ClawGym-SynData, a diverse dataset of 13.5K filtered tasks synthesized from persona-driven intents and skill-grounded operations, paired with realistic mock workspaces and hybrid verification mechanisms. We then train a family of capable Claw-style models, termed ClawGym-Agents, through supervised fine-tuning on black-box rollout trajectories, and further explore reinforcement learning via a lightweight pipeline that parallelizes rollouts across per-task sandboxes. To support reliable evaluation, we also construct ClawGym-Bench, a benchmark of 200 instances calibrated through automated filtering and human-LLM review. Relevant resources will soon be released at https://github.com/ClawGym.
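To make the rollout-parallelization idea concrete, the sketch below shows one plausible shape for a pipeline that runs each task in its own throwaway sandbox directory and verifies the resulting workspace state. This is a minimal illustration, not ClawGym's actual implementation: the task schema, run_agent, rollout, and the existence-based check are all hypothetical stand-ins (the paper describes hybrid verification and a black-box agent, whose interfaces are not specified here).

```python
import json
import shutil
import tempfile
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

# Hypothetical task records: a synthesized intent plus the mock workspace
# files it operates on and a simple file-based completion check.
TASKS = [
    {
        "task_id": "t001",
        "intent": "Merge the two CSV reports into summary.csv",
        "workspace_files": {"a.csv": "x,1\n", "b.csv": "y,2\n"},
        "expected_output": "summary.csv",
    },
    {
        "task_id": "t002",
        "intent": "Rename notes.txt to notes_final.txt",
        "workspace_files": {"notes.txt": "draft\n"},
        "expected_output": "notes_final.txt",
    },
]


def run_agent(intent: str, workspace: Path) -> None:
    """Placeholder for the black-box agent call; a real pipeline would
    invoke the model/tool loop here and let it mutate the workspace."""
    # Trivial stand-in behavior so the sketch runs end to end.
    (workspace / "summary.csv").write_text("merged\n")
    if (workspace / "notes.txt").exists():
        (workspace / "notes.txt").rename(workspace / "notes_final.txt")


def rollout(task: dict) -> dict:
    """Run one task inside an isolated, per-task sandbox directory."""
    sandbox = Path(tempfile.mkdtemp(prefix=f"clawgym_{task['task_id']}_"))
    try:
        # Materialize the mock workspace inside the sandbox.
        for name, content in task["workspace_files"].items():
            (sandbox / name).write_text(content)
        run_agent(task["intent"], sandbox)
        # Verification: a bare existence check stands in for the paper's
        # hybrid (rule-based plus LLM-judged) verification mechanisms.
        passed = (sandbox / task["expected_output"]).exists()
        return {"task_id": task["task_id"], "passed": passed}
    finally:
        shutil.rmtree(sandbox, ignore_errors=True)


if __name__ == "__main__":
    # Parallelize rollouts: one worker per task, each with its own sandbox,
    # so concurrent tasks cannot clobber each other's workspace state.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(rollout, t) for t in TASKS]
        for fut in as_completed(futures):
            print(json.dumps(fut.result()))
```

The key design point this sketch illustrates is that per-task sandbox isolation is what makes parallel rollouts safe: because each task owns its workspace, trajectories can be collected (for supervised fine-tuning) or scored (for reinforcement learning) concurrently without shared state.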