Unified Policy Value Decomposition for Rapid Adaptation
2026-03-18 • Machine Learning
AI summary
The authors present a method to help reinforcement learning agents quickly adapt to new tasks by sharing a simple, low-dimensional representation called a goal embedding. Instead of relearning everything, the agent uses a set of fixed building blocks for policies and values, combined with this goal embedding to handle new tasks immediately. They tested their approach on a robot simulation making it walk in different directions, showing it can generalize well to new directions by mixing learned components. This approach is inspired by how some brain neurons modulate responses and could make learning more efficient in complex systems.
reinforcement learning, policy function, value function, goal embedding, bilinear actor-critic, Soft Actor-Critic, MuJoCo Ant environment, Successor Features, gain modulation, zero-shot adaptation
Authors
Cristiano Capone, Luca Falorsi, Andrea Ciardiello, Luca Manneschi
Abstract
Rapid adaptation in complex control systems remains a central challenge in reinforcement learning. We introduce a framework in which policy and value functions share a low-dimensional coefficient vector, a goal embedding, that captures task identity and enables immediate adaptation to novel tasks without retraining representations. During pretraining, we jointly learn structured value bases and compatible policy bases through a bilinear actor-critic decomposition. The critic factorizes as Q = sum_k G_k(g) y_k(s,a), where the G_k(g) are goal-conditioned coefficients and the y_k(s,a) are learned value basis functions. This multiplicative gating, in which a context signal scales a set of state-dependent bases, is reminiscent of gain modulation observed in Layer 5 pyramidal neurons, where top-down inputs modulate the gain of sensory-driven responses without altering their tuning. Building on Successor Features, we extend the decomposition to the actor, which composes a set of primitive policies weighted by the same coefficients G_k(g). At test time the bases are frozen and G_k(g) is estimated zero-shot via a single forward pass, enabling immediate adaptation to novel tasks without any gradient update. We train a Soft Actor-Critic agent on the MuJoCo Ant environment under a multi-directional locomotion objective, requiring the agent to walk in eight directions specified as continuous goal vectors. The bilinear structure allows each policy head to specialize to a subset of directions, while the shared coefficient layer generalizes across them, accommodating novel directions by interpolating in goal embedding space. Our results suggest that shared low-dimensional goal embeddings offer a general mechanism for rapid, structured adaptation in high-dimensional control, and highlight a potentially biologically plausible principle for efficient transfer in complex reinforcement learning systems.
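The bilinear critic described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the dimensions, the linear stand-ins for the learned basis network y(s,a) and the coefficient head G(g), and the 2-D goal vectors (walking directions) are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): state-action feature size,
# number of bases K, and 2-D goal vectors encoding walking directions.
D_SA, K, D_G = 16, 8, 2

# Stand-ins for the learned networks: value-basis weights (frozen after
# pretraining) and the goal-conditioned coefficient head, both linear here.
W_y = rng.normal(size=(K, D_SA))
W_g = rng.normal(size=(K, D_G))

def value_bases(sa):
    """y(s, a): vector of K state-action basis values."""
    return W_y @ sa

def goal_coefficients(g):
    """G(g): K goal-conditioned coefficients from a single forward pass."""
    return W_g @ g

def q_value(sa, g):
    """Bilinear critic: Q(s, a, g) = sum_k G_k(g) * y_k(s, a)."""
    return goal_coefficients(g) @ value_bases(sa)

# Zero-shot adaptation to a novel direction: only the coefficient head
# runs a forward pass; the value bases stay frozen, with no gradient update.
sa = rng.normal(size=D_SA)
g_novel = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # unseen direction
print(q_value(sa, g_novel))
```

Because the coefficients multiplicatively gate fixed bases, a novel goal is handled by reading off G(g) and mixing the same K components, which is the interpolation-in-embedding-space behavior the abstract describes.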