SafeGen-LLM: Enhancing Safety Generalization in Task Planning for Robotic Systems

2026-02-27

Robotics · Artificial Intelligence
AI summary

The authors address the problem that current robotic task planners struggle to satisfy safety requirements and to adapt to new safety rules. They create SafeGen-LLM, a large language model trained in two stages to better understand and follow safety constraints in task planning. The model is evaluated on a purpose-built benchmark with explicit safety rules spanning several task domains. Experiments show that SafeGen-LLM improves safety and generalizes better to new situations than existing methods.

Safety-critical task planning · Robotic systems · Reinforcement Learning · Large Language Models · PDDL3 · Supervised Fine-Tuning · Policy Optimization · Formal verification · Curriculum learning
Authors
Jialiang Fan, Weizhe Xu, Mengyu Liu, Oleg Sokolsky, Insup Lee, Fangxin Kong
Abstract
Safety-critical task planning in robotic systems remains challenging: classical planners suffer from poor scalability, Reinforcement Learning (RL)-based methods generalize poorly, and base Large Language Models (LLMs) cannot guarantee safety. To address this gap, we propose a safety-generalizable large language model, named SafeGen-LLM. SafeGen-LLM not only enhances the safety satisfaction of task plans but also generalizes well to novel safety properties in various domains. We first construct a multi-domain Planning Domain Definition Language 3 (PDDL3) benchmark with explicit safety constraints. Then, we introduce a two-stage post-training framework: Supervised Fine-Tuning (SFT) on a constraint-compliant planning dataset to learn planning syntax and semantics, and Group Relative Policy Optimization (GRPO) guided by fine-grained reward machines derived from formal verification to enforce safety alignment, and by curriculum learning to better handle complex tasks. Extensive experiments show that SafeGen-LLM achieves strong safety generalization and outperforms frontier proprietary baselines across multi-domain planning tasks and multiple input formats (e.g., PDDL and natural language).
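To make the reward-machine idea concrete, the sketch below scores a candidate plan trace against an "always (not p)"-style PDDL3 safety constraint, giving graded rewards for safe steps and goal achievement and a hard penalty on violation. This is a minimal illustration of the concept only: the function name, reward values, and state representation are assumptions for exposition, not the authors' actual reward machines or GRPO reward function.

```python
# Illustrative sketch of a fine-grained reward machine for safety-aligned
# plan scoring. Reward magnitudes and names are hypothetical.

def reward_machine(plan, always_forbidden, goal):
    """Score a plan, given as a sequence of states (sets of true predicates).

    - each safe step earns a small shaping reward,
    - violating an (always (not p)) constraint ends scoring with a penalty,
    - reaching the goal in the final state earns a bonus.
    """
    reward = 0.0
    for state in plan:
        if always_forbidden & state:   # a forbidden predicate became true
            return reward - 1.0        # hard safety penalty, stop scoring
        reward += 0.1                  # shaping reward for a safe step
    if goal <= plan[-1]:               # all goal predicates hold at the end
        reward += 1.0
    return reward

# Example: "holding_fragile_over_floor" must never hold during the plan.
forbidden = {"holding_fragile_over_floor"}
goal = {"goal_done"}
safe_trace = [{"at_start"}, {"holding_cup"}, {"cup_on_table", "goal_done"}]
unsafe_trace = [{"at_start"}, {"holding_fragile_over_floor"}]

print(reward_machine(safe_trace, forbidden, goal))    # positive: safe and goal reached
print(reward_machine(unsafe_trace, forbidden, goal))  # negative: safety violation
```

In GRPO-style training, such graded scores (rather than a single pass/fail signal) let the policy receive partial credit for mostly-safe plans, which is what makes curriculum learning over increasingly complex tasks tractable.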