End-to-End Efficient RL for Linear Bellman Complete MDPs with Deterministic Transitions
2026-03-24 • Machine Learning
AI summary
The authors study reinforcement learning in settings that satisfy a linear structure called linear Bellman completeness. They give a new algorithm that is computationally efficient when the system's transitions are deterministic, while rewards and starting states may be random. The method handles both small and very large sets of possible actions, needing only a standard oracle that picks the best action when there are many choices. It learns a policy close to the best possible, with sample and computation costs that grow polynomially with the problem size and the desired accuracy.
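The "standard way to pick the best action" mentioned above is an argmax oracle: given a linear weight vector, return the action maximizing the induced linear Q-value. A minimal sketch, with all names (`argmax_oracle`, `phi`, `theta`) illustrative rather than taken from the paper:

```python
def argmax_oracle(phi, state, actions, theta):
    """Return the action a maximizing the linear Q-value phi(state, a) . theta.

    phi: feature map (state, action) -> list of floats
    theta: weight vector of the same dimension as the features
    For finite action sets this is a direct scan; the paper assumes such
    an oracle is available even when the action set is large or infinite.
    """
    def q(a):
        # Inner product phi(state, a) . theta
        return sum(f * t for f, t in zip(phi(state, a), theta))
    return max(actions, key=q)

# Toy example: 2-dimensional features over 3 discrete actions.
phi = lambda s, a: [float(s == a), float(a)]
best = argmax_oracle(phi, 1, [0, 1, 2], theta=[10.0, 1.0])
# best == 1: q(0) = 0, q(1) = 11, q(2) = 2
```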
Reinforcement Learning, Markov Decision Process, Linear Function Approximation, Bellman Completeness, Deterministic Transitions, Stochastic Rewards, Sample Complexity, Computational Efficiency, Argmax Oracle, Policy Learning
Authors
Zakaria Mhammedi, Alexander Rakhlin, Nneka Okolo
Abstract
We study reinforcement learning (RL) with linear function approximation in Markov Decision Processes (MDPs) satisfying \emph{linear Bellman completeness} -- a fundamental setting where the Bellman backup of any linear value function remains linear. While statistically tractable, prior computationally efficient algorithms are either limited to small action spaces or require strong oracle assumptions over the feature space. We provide a computationally efficient algorithm for linear Bellman complete MDPs with \emph{deterministic transitions}, stochastic initial states, and stochastic rewards. For finite action spaces, our algorithm is end-to-end efficient; for large or infinite action spaces, we require only a standard argmax oracle over actions. Our algorithm learns an $\varepsilon$-optimal policy with sample and computational complexity polynomial in the horizon, feature dimension, and $1/\varepsilon$.
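The abstract's phrase "the Bellman backup of any linear value function remains linear" is usually formalized as follows; the notation here ($\phi$ for features, $r_h$ for mean rewards, $f_h$ for the deterministic transition map) is assumed for illustration, not taken from the paper. For every step $h$ and every $\theta \in \mathbb{R}^d$, there exists $\theta' \in \mathbb{R}^d$ such that
\[
\phi(s,a)^\top \theta' \;=\; r_h(s,a) \;+\; \max_{a'} \phi\big(f_h(s,a), a'\big)^\top \theta \qquad \text{for all } (s,a),
\]
i.e., applying the Bellman backup to the linear value function $\phi^\top \theta$ yields another function that is linear in the same features.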