Dynamic Skill Lifecycle Management for Agentic Reinforcement Learning

2026-05-11

Machine Learning, Computation and Language
AI summary

The authors address how AI agents use external skills to complete complex tasks, pointing out that permanently keeping all skills, or fully internalizing all of them, is not the best approach. They propose SLIM, a method that dynamically decides which skills to retain, retire, or add based on how useful each skill proves over time. Their experiments show SLIM improves performance by about 7 percentage points and balances internalizing skills into the policy with continuing to use external ones. This offers a flexible way for AI agents to manage skills during learning.

large language models, external skills, reinforcement learning, skill lifecycle management, policy learning, modular skills, leave-one-skill-out validation, agentic learning, ALFWorld, SearchQA
Authors
Junhao Shen, Teng Zhang, Xiaoyan Zhao, Hong Cheng
Abstract
Large language model agents increasingly rely on external skills to solve complex tasks, where skills act as modular units that extend their capabilities beyond what parametric memory alone supports. Existing methods assume external skills either accumulate as persistent guidance or are internalized into the policy, eventually leading to zero-skill inference. We argue this assumption is overly restrictive: with limited parametric capacity and uneven marginal contributions across skills, the optimal active skill set is non-monotonic and both task- and stage-dependent. In this work, we propose SLIM, a framework of dynamic Skill LIfecycle Management for agentic reinforcement learning (RL), which treats the active external skill set as a dynamic optimization variable jointly updated with policy learning. Specifically, SLIM estimates each active skill's marginal external contribution through leave-one-skill-out validation, then applies three lifecycle operations: retaining high-value skills, retiring skills whose contribution becomes negligible after sufficient exposure, and expanding the skill bank when persistent failures reveal missing capability coverage. Experiments show that SLIM outperforms the best baselines by an average of 7.1 percentage points across ALFWorld and SearchQA. Results further indicate that policy learning and external skill retention are not mutually exclusive: some skills are absorbed into the policy, while others continue to provide external value, supporting SLIM as a more general paradigm for skill-based agentic RL.
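The lifecycle logic described in the abstract — leave-one-skill-out estimation of each skill's marginal contribution, followed by retain/retire/expand operations — can be sketched as follows. This is a minimal illustration of the idea only, not the authors' implementation: the function names, the scalar `evaluate` callback, and the thresholds (`retire_eps`, `min_exposure`, `expand_thresh`) are all assumed for the example.

```python
def marginal_contribution(active, skill, evaluate):
    """Leave-one-skill-out: validation score with the full active set
    minus the score with `skill` removed (assumed scalar `evaluate`)."""
    full = evaluate(active)
    ablated = evaluate([s for s in active if s is not skill])
    return full - ablated

def lifecycle_step(active, bank, evaluate, exposure, *,
                   retire_eps=0.01, min_exposure=5,
                   fail_rate=None, expand_thresh=0.5):
    """One illustrative lifecycle update (hypothetical thresholds):
    retain high-value skills, retire negligible ones after enough
    exposure, and expand from the bank under persistent failures."""
    retained = []
    for skill in active:
        gain = marginal_contribution(active, skill, evaluate)
        exposure[skill] = exposure.get(skill, 0) + 1
        # Retire: negligible marginal value after sufficient exposure,
        # e.g. because the policy has internalized the skill.
        if gain <= retire_eps and exposure[skill] >= min_exposure:
            continue
        retained.append(skill)
    # Expand: persistent failures suggest missing capability coverage.
    if fail_rate is not None and fail_rate >= expand_thresh and bank:
        retained.append(bank.pop(0))
    return retained
```

The key design point the abstract argues for is that this loop runs jointly with policy learning, so the same skill can move between "externally retained" and "retired because internalized" as training progresses.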