AI summary
The authors study a problem where different choices (actions) are linked to unknown factors, and picking one choice can reveal clues about several factors at once through a network. Unlike earlier research that assumes all choices are always available, they focus on a more realistic setting where only some choices can be made at any time, and this set changes every round. Their new method, called UCB-LP-A, decides which choices to make by solving a linear program that accounts for the changing options while still learning efficiently. The method comes with a mathematical guarantee on performance and outperforms other approaches in simulations. This is useful in situations like social networks, where people are not always online but still reveal information about one another.
multi-armed bandit, stochastic availability, side-observations, bipartite graph, exploration-exploitation, linear programming, regret bound, activation set, network structure, UCB algorithm
Authors
Ashutosh Soni, Peizhong Ju, Atilla Eryilmaz, Ness B. Shroff
Abstract
We study the stochastic multi-armed bandit (MAB) problem where an underlying network structure enables side-observations across related actions. We use a bipartite graph to link actions to a set of unknowns, such that selecting an action reveals observations for all the unknowns it is connected to. While previous works rely on the assumption that all actions are permanently accessible, we investigate the more practical setting of stochastic availability, where the set of feasible actions (the "activation set") varies dynamically in each round. This framework models real-world systems with both structural dependencies and volatility, such as social networks where users provide side-information about their peers' preferences, yet are not always online to be queried. To address this challenge, we propose UCB-LP-A, a novel policy that leverages a Linear Programming (LP) approach to optimize exploration-exploitation trade-offs under stochastic availability. Unlike standard network bandit algorithms that assume constant access, UCB-LP-A computes an optimal sampling distribution over the realizable activation sets, ensuring that the necessary observations are gathered using only the currently active arms. We derive a theoretical upper bound on the regret of our policy, characterizing the impact of both the network structure and the activation probabilities. Finally, we demonstrate through numerical simulations that UCB-LP-A significantly outperforms existing heuristics that ignore either the side-information or the availability constraints.
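The setting in the abstract can be illustrated with a toy simulation. The sketch below is an assumption-laden stand-in, not the authors' UCB-LP-A (which computes an optimal sampling distribution via an LP): it plays a plain UCB index restricted to the arms that happen to be active each round, and propagates side-observations along a hypothetical bipartite graph `obs_graph` mapping each arm to the unknowns it reveals.

```python
import math
import random


def simulate(mu, obs_graph, p_active, horizon, seed=0):
    """Toy UCB with side-observations under stochastic arm availability.

    Illustrative sketch only (not the paper's UCB-LP-A). Each round, every
    arm is independently active with probability p_active; we pull the active
    arm with the highest UCB index, and pulling an arm also yields noisy
    samples of every unknown it is linked to in obs_graph.
    """
    rng = random.Random(seed)
    n = len(mu)
    counts = [0] * n       # number of observations of each unknown
    sums = [0.0] * n       # running sums of those observations
    regret = 0.0
    for t in range(1, horizon + 1):
        # Stochastic availability: the activation set changes every round.
        active = [i for i in range(n) if rng.random() < p_active]
        if not active:
            continue  # no arm can be played this round

        def ucb(i):
            if counts[i] == 0:
                return float("inf")  # force at least one observation
            mean = sums[i] / counts[i]
            return mean + math.sqrt(2.0 * math.log(t) / counts[i])

        arm = max(active, key=ucb)
        # Regret measured against the best *currently active* arm.
        regret += max(mu[j] for j in active) - mu[arm]
        # Side-observations: playing `arm` reveals all linked unknowns.
        for j in obs_graph[arm]:
            counts[j] += 1
            sums[j] += rng.gauss(mu[j], 0.1)
    return regret
```

Because side-observations let one pull update several unknowns at once, even this naive policy can learn faster than graph-blind UCB; the paper's LP layer goes further by choosing which active arms to sample so that the required observations are still gathered when some arms are offline.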