Temporal Credit Is Free
2026-03-30 • Machine Learning
AI summary
The authors show that recurrent neural networks can learn from data over time without needing complex backward calculations called Jacobian propagation. They explain that the network’s current state already contains enough information to update itself using simple immediate changes. By avoiding old, confusing memory traces and adjusting how gradients are scaled, they improve learning efficiency. They also provide a rule to know when this gradient scaling is necessary. Their approach works well across different network designs, brain data from primates, and real-time machine learning tasks, using much less memory.
Recurrent Neural Networks · Jacobian Propagation · Hidden State · Gradient Normalization · RMSprop · Temporal Credit Assignment · Online Learning · Real-Time Recurrent Learning (RTRL) · Streaming Machine Learning · Nonlinear State Update
Authors
Aur Shalev Merin
Abstract
Recurrent networks do not need Jacobian propagation to adapt online. The hidden state already carries temporal credit through the forward pass; immediate derivatives suffice if you stop corrupting them with stale trace memory and normalize gradient scales across parameter groups. An architectural rule predicts when normalization is needed: β₂ is required when gradients must pass through a nonlinear state update with no output bypass, and unnecessary otherwise. Across ten architectures, real primate neural data, and streaming ML benchmarks, immediate derivatives with RMSprop match or exceed full RTRL, scaling to n = 1024 at 1000× less memory.
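To make the abstract concrete, here is a minimal sketch of the idea as I read it: an online RNN update that uses only immediate derivatives (the previous hidden state is treated as a constant, so no Jacobian propagation or eligibility traces through time), with RMSprop-style second-moment normalization (the β₂ term) equalizing gradient scales across parameter groups. The toy task, network sizes, and hyperparameters are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 4  # hidden size, input size (illustrative choices)

# Simple tanh RNN with a linear readout
W = rng.normal(0, 0.3, (n, n))
U = rng.normal(0, 0.3, (n, d))
w_out = rng.normal(0, 0.3, n)

# RMSprop second-moment accumulators; beta2 controls the normalization
beta2, eps, lr = 0.999, 1e-8, 1e-2
params = dict(W=W, U=U, w_out=w_out)
v = {k: np.zeros_like(p) for k, p in params.items()}

h = np.zeros(n)
errors = []
for t in range(2000):
    x = rng.normal(size=d)
    target = x[0]  # toy streaming task: reproduce the first input component

    # Forward: nonlinear state update, linear readout
    h_new = np.tanh(W @ h + U @ x)
    y = w_out @ h_new
    err = y - target
    errors.append(err**2)

    # Immediate derivatives only: backprop through the current step,
    # treating the previous hidden state h as a constant
    d_a = err * w_out * (1 - h_new**2)
    grads = dict(W=np.outer(d_a, h), U=np.outer(d_a, x), w_out=err * h_new)

    # RMSprop normalization: divide each gradient by its running RMS,
    # so parameter groups with very different scales update comparably
    for k, g in grads.items():
        v[k] = beta2 * v[k] + (1 - beta2) * g**2
        params[k] -= lr * g / (np.sqrt(v[k]) + eps)

    h = h_new
```

Memory here is O(n²) for the parameters and accumulators only; full RTRL would additionally carry an O(n³) sensitivity tensor, which is the gap the abstract's 1000× figure refers to.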