Safe Continual Reinforcement Learning in Non-stationary Environments

2026-04-21

Machine Learning
AI summary

The authors study how reinforcement learning (RL) can be used to create controllers that keep systems safe while adapting to changes over time. They point out that most RL methods assume the system stays the same and don’t handle changing conditions well, especially when safety is critical. By testing different approaches on new benchmark tasks, they find it’s hard to both maintain safety and remember past knowledge during changes. They explore methods that reduce this conflict and discuss future challenges in making controllers that learn safely and reliably in changing environments.

Reinforcement Learning · Safe RL · Continual Learning · Non-stationary Dynamics · Catastrophic Forgetting · Control Systems · Regularization · Safety Constraints · Adaptive Control · Benchmark Environments
Authors
Austin Coursey, Abel Diaz-Gonzalez, Marcos Quinones-Grueiro, Gautam Biswas
Abstract
Reinforcement learning (RL) offers a compelling data-driven paradigm for synthesizing controllers for complex systems when accurate physical models are unavailable; however, most existing control-oriented RL methods assume stationarity and, therefore, struggle in real-world non-stationary deployments where system dynamics and operating conditions can change unexpectedly. Moreover, RL controllers acting in physical environments must satisfy safety constraints throughout their learning and execution phases, rendering transient violations during adaptation unacceptable. Although continual RL and safe RL have each addressed non-stationarity and safety, respectively, their intersection remains comparatively unexplored, motivating the study of safe continual RL algorithms that can adapt over the system's lifetime while preserving safety. In this work, we systematically investigate safe continual reinforcement learning by introducing three benchmark environments that capture safety-critical continual adaptation and by evaluating representative approaches from safe RL, continual RL, and their combinations. Our empirical results reveal a fundamental tension between maintaining safety constraints and preventing catastrophic forgetting under non-stationary dynamics, with existing methods generally failing to achieve both objectives simultaneously. To address this shortcoming, we examine regularization-based strategies that partially mitigate this trade-off and characterize their benefits and limitations. Finally, we outline key open challenges and research directions toward developing safe, resilient learning-based controllers capable of sustained autonomous operation in changing environments.
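The abstract does not specify which regularization-based strategies are examined. As a rough, hedged illustration of the general idea, the sketch below combines a Lagrangian-style safe-RL objective (task loss plus a multiplier-weighted constraint cost) with an EWC-style quadratic penalty that anchors the policy parameters to those learned under a previous dynamics regime, discouraging catastrophic forgetting. All function names and parameters here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def ewc_penalty(params, anchor_params, fisher_diag, lam=1.0):
    """Quadratic penalty pulling current parameters toward those learned
    on a previous dynamics regime, weighted by an approximate diagonal
    Fisher information (elastic weight consolidation style).
    NOTE: illustrative sketch, not the method from the paper."""
    diff = params - anchor_params
    return 0.5 * lam * np.sum(fisher_diag * diff ** 2)

def regularized_objective(task_loss, constraint_cost, lagrange_mult,
                          params, anchor_params, fisher_diag, lam=1.0):
    """Lagrangian-style safe-RL objective with a continual-learning
    regularizer: task loss + multiplier * safety-constraint cost
    + EWC penalty against the previous regime's parameters."""
    return (task_loss
            + lagrange_mult * constraint_cost
            + ewc_penalty(params, anchor_params, fisher_diag, lam))
```

The penalty is zero when the parameters have not moved from the anchor, and grows quadratically with parameter drift along directions the Fisher term marks as important, which is the mechanism by which such regularizers trade plasticity for retention.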