Stochastic Resetting Accelerates Policy Convergence in Reinforcement Learning

2026-03-17 · Machine Learning

AI summary

The authors studied how a process called stochastic resetting—randomly bringing a system back to a starting point—helps machines learn better. They found that resetting speeds up learning in simple grid tasks and more complex control problems, even when it doesn't make finding goals faster by itself. Unlike usual methods that change how future rewards are valued, resetting just cuts off long, unhelpful attempts, helping useful information spread faster. This shows resetting can be a simple way to improve how machines learn from experience.

Keywords
Stochastic resetting · Reinforcement learning · First-passage time · Policy convergence · Tabular grid environment · Continuous control · Deep reinforcement learning · Value propagation · Temporal discounting
Authors
Jello Zhou, Vudtiwat Ngampruetikorn, David J. Schwab
Abstract
Stochastic resetting, where a dynamical process is intermittently returned to a fixed reference state, has emerged as a powerful mechanism for optimizing first-passage properties. Existing theory largely treats static, non-learning processes. Here we ask how stochastic resetting interacts with reinforcement learning, where the underlying dynamics adapt through experience. In tabular grid environments, we find that resetting accelerates policy convergence even when it does not reduce the search time of a purely diffusive agent, indicating a novel mechanism beyond classical first-passage optimization. In a continuous control task with neural-network-based value approximation, we show that random resetting improves deep reinforcement learning when exploration is difficult and rewards are sparse. Unlike temporal discounting, resetting preserves the optimal policy while accelerating convergence by truncating long, uninformative trajectories to enhance value propagation. Our results establish stochastic resetting as a simple, tunable mechanism for accelerating learning, translating a canonical phenomenon of statistical mechanics into an optimization principle for reinforcement learning.
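To make the mechanism concrete, the sketch below runs tabular Q-learning with Poissonian stochastic resetting: with a fixed per-step probability, the agent is returned to its start state, truncating long uninformative excursions while leaving the Bellman update, and hence the optimal policy, untouched. This is an illustrative toy on a one-dimensional chain, not the authors' implementation; the environment, reset rate, and hyperparameters are assumptions chosen for brevity (the paper uses grid environments and a continuous control task).

```python
import random

def q_learning_with_resetting(n_states=10, reset_rate=0.05, episodes=200,
                              alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a 1D chain with Poissonian stochastic resetting.

    States are 0 .. n_states-1; the agent starts at state 0 and receives
    reward 1 on reaching the terminal state n_states-1.  With probability
    `reset_rate` per step the agent is teleported back to the start,
    cutting off long excursions without changing the learning target.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = right, 1 = left
    goal = n_states - 1

    for _ in range(episodes):
        s = 0
        for _ in range(100):  # per-episode step cap
            # epsilon-greedy action selection (ties break toward "right")
            a = rng.randrange(2) if rng.random() < eps else (0 if Q[s][0] >= Q[s][1] else 1)
            ns = min(s + 1, goal) if a == 0 else max(s - 1, 0)
            r = 1.0 if ns == goal else 0.0
            # standard Q-learning update; resetting does not alter it
            Q[s][a] += alpha * (r + gamma * max(Q[ns]) - Q[s][a])
            if ns == goal:
                break
            # stochastic resetting: jump back to the start at a fixed rate
            s = 0 if rng.random() < reset_rate else ns
    return Q
```

Because the reset only redirects the behavior trajectory and never enters the update target, the learned values still converge toward the reset-free optimum; only the distribution of visited states, and thus the speed of value propagation, changes.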