TSN-Affinity: Similarity-Driven Parameter Reuse for Continual Offline Reinforcement Learning
2026-04-28 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors study how to teach a computer to learn multiple tasks over time from offline data without forgetting old tasks, a setting called continual offline reinforcement learning (CORL). They note that current methods either use a lot of memory or have problems mixing old and new knowledge. Their approach, TSN-Affinity, uses small, task-specific parts of a model combined with a way to share knowledge based on how similar tasks are. Tested on video games and robot tasks, their method retains past skills well and improves learning of new tasks. They suggest their strategy is a promising alternative to methods that rely on storing and replaying old data.
Continual Offline Reinforcement Learning • Catastrophic Forgetting • Replay-based Learning • Architectural Continual Learning • TinySubNetworks • Decision Transformer • Task-specific Parameterization • Action Compatibility • Multi-task Learning • Franka Emika Panda
Authors
Dominik Żurek, Kamil Faber, Marcin Pietron, Paweł Gajewski, Roberto Corizzo
Abstract
Continual offline reinforcement learning (CORL) aims to learn a sequence of tasks from datasets collected over time while preserving performance on previously learned tasks. This setting corresponds to domains where new tasks arise over time, but adapting the model through live environment interaction is expensive, risky, or impossible. However, CORL inherits a dual difficulty: the challenges of offline reinforcement learning, and the need to adapt to new tasks while preventing catastrophic forgetting. Replay-based continual learning approaches remain a strong baseline but incur memory overhead and suffer from a distribution mismatch between replayed samples and newly learned policies. At the same time, architectural continual learning methods have shown strong potential in supervised learning but remain underexplored in CORL. In this work, we propose TSN-Affinity, a novel CORL method based on TinySubNetworks and Decision Transformer. The method enables task-specific parameterization and controlled knowledge sharing through an RL-aware reuse strategy that routes tasks according to action compatibility and latent similarity. We evaluate the approach on benchmarks based on Atari games and simulations of manipulation tasks with the Franka Emika Panda robotic arm, covering both discrete and continuous control. Results show strong retention from sparse SubNetworks, with routing further improving multi-task performance. Our findings suggest that similarity-guided architectural reuse is a strong and viable alternative to replay-based strategies in a CORL setting. Our code is available at: https://github.com/anonymized-for-submission123/tsn-affinity.
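To give a feel for the similarity-guided reuse idea described above, here is a minimal sketch of routing a new task to a previously learned subnetwork by cosine similarity over per-task latent embeddings. This is an illustrative assumption, not the authors' implementation: the function name, the threshold, the embedding representation, and the decision to fall back to a fresh subnetwork are all hypothetical; the paper additionally gates reuse on action compatibility, which is omitted here for brevity.

```python
import numpy as np

def route_task(new_embedding, stored_embeddings, threshold=0.8):
    """Pick the most similar previous task's subnetwork to reuse,
    or signal that a fresh sparse subnetwork should be allocated.

    new_embedding: latent vector summarizing the new task's offline dataset.
    stored_embeddings: dict mapping task id -> latent vector of a learned task.
    Returns (task_id, similarity) of the best match, or (None, similarity)
    when no stored task is similar enough to warrant parameter reuse.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_id, best_sim = None, -1.0
    for task_id, emb in stored_embeddings.items():
        sim = cosine(new_embedding, emb)
        if sim > best_sim:
            best_id, best_sim = task_id, sim

    if best_sim >= threshold:
        return best_id, best_sim   # reuse: initialize from this subnetwork
    return None, best_sim          # allocate a fresh task-specific subnetwork
```

In a routing scheme like this, a high-similarity match lets the new task start from (and share) an existing task-specific parameterization, while a low similarity triggers isolated parameters, which is one way to trade knowledge transfer against interference between tasks.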