SAVGO: Learning State-Action Value Geometry with Cosine Similarity for Continuous Control
2026-05-01 • Machine Learning
AI summary
The authors propose a new reinforcement learning method called SAVGO that makes the policy improvement process more aware of the value similarities between different actions. SAVGO creates a special space where actions with similar value estimates are close together, helping the policy choose better actions beyond just following local gradients. This approach combines representation learning, value estimation, and policy updates into one consistent framework. Their tests on robot control tasks show SAVGO performs better than strong existing methods, and they study which parts of their method contribute most to the improvement.
Reinforcement Learning · Representation Learning · Policy Optimization · Value Function · Action-Value Embedding · Cosine Similarity · Actor-Critic · Off-Policy Learning · MuJoCo · Continuous Control
Authors
Stavros Orfanoudakis, Pedro P. Vergara
Abstract
While representation and similarity learning have improved the sample efficiency of Reinforcement Learning (RL), they are rarely used to shape policy updates directly in the action space. To bridge this gap, a geometry-aware RL algorithm that explicitly incorporates value-based similarity into the policy update, State-Action Value Geometry Optimization (SAVGO), is proposed. In detail, SAVGO learns a joint state-action embedding space in which pairs with similar action-value estimates exhibit high cosine similarity, while dissimilar pairs are mapped to distinct directions. This learned geometry enables the generation of a similarity kernel over candidate actions sampled at each update, allowing policy improvement to be guided directly toward higher-value regions beyond local gradient-based updates. As a result, representation learning, value estimation, and policy optimization are unified within a single geometry-consistent objective, while preserving the scalability of off-policy actor-critic training. The proposed method is evaluated on standard MuJoCo continuous-control benchmarks, demonstrating improvements over strong baselines on challenging high-dimensional tasks. Ablation studies are done to analyze the contributions of value-geometry learning and similarity-based policy updates.