Cold-Start Personalization via Training-Free Priors from Structured World Models

2026-02-16 · Computation and Language

Computation and Language · Artificial Intelligence · Machine Learning
AI summary

The authors address how to infer a new user's preferences when no historical data exists, which is difficult because each user cares about only a few of many possible factors. They show that training a model offline to learn patterns in preferences, then applying Bayesian reasoning online, yields smarter questions and more accurate predictions of complete preference profiles. Their method, Pep, outperforms standard reinforcement learning while asking fewer questions, adapting more to user answers, and using far less compute. In short, Pep makes cold-start personalization more efficient by exploiting the structure in preference data.

cold-start personalization · preference elicitation · reinforcement learning · Bayesian inference · offline learning · multi-turn interaction · user modeling · structured data · preference correlation · belief models
Authors
Avinandan Bose, Shuyue Stella Li, Faeze Brahman, Pang Wei Koh, Simon Shaolei Du, Yulia Tsvetkov, Maryam Fazel, Lin Xiao, Asli Celikyilmaz
Abstract
Cold-start personalization requires inferring user preferences through interaction when no user-specific historical data is available. The core challenge is a routing problem: each task admits dozens of preference dimensions, yet individual users care about only a few, and which ones matter depends on who is asking. With a limited question budget, asking without structure will miss the dimensions that matter. Reinforcement learning is the natural formulation, but in multi-turn settings its terminal reward fails to exploit the factored, per-criterion structure of preference data, and in practice learned policies collapse to static question sequences that ignore user responses. We propose decomposing cold-start elicitation into offline structure learning and online Bayesian inference. Pep (Preference Elicitation with Priors) learns a structured world model of preference correlations offline from complete profiles, then performs training-free Bayesian inference online to select informative questions and predict complete preference profiles, including dimensions never asked about. The framework is modular across downstream solvers and requires only simple belief models. Across medical, mathematical, social, and commonsense reasoning, Pep achieves 80.8% alignment between generated responses and users' stated preferences versus 68.5% for RL, with 3-5x fewer interactions. When two users give different answers to the same question, Pep changes its follow-up 39-62% of the time versus 0-28% for RL. It does so with ~10K parameters versus 8B for RL, showing that the bottleneck in cold-start elicitation is the capability to exploit the factored structure of preference data.
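The abstract's two-phase recipe (offline structure learning over complete profiles, then training-free online Bayesian inference for question selection and profile prediction) can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's implementation: the "world model" is reduced to an empirical distribution over binary preference profiles, questions are selected greedily by expected information gain, and the Bayesian update is exact filtering over that discrete belief.

```python
import numpy as np

# Offline phase (assumed toy form): the structured world model is just an
# empirical distribution over complete binary preference profiles.
profiles = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
])
weights = np.ones(len(profiles)) / len(profiles)  # prior belief over profiles

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def marginal(weights, dim):
    """P(dimension = 1) under the current belief."""
    return weights @ profiles[:, dim]

def pick_question(weights, asked):
    """Greedy question selection: since the answer is deterministic given the
    profile, expected information gain equals the entropy of the answer."""
    best, best_gain = None, -1.0
    for d in range(profiles.shape[1]):
        if d in asked:
            continue
        p1 = marginal(weights, d)
        gain = entropy(np.array([p1, 1 - p1]))
        if gain > best_gain:
            best, best_gain = d, gain
    return best

def update(weights, dim, answer):
    """Exact Bayesian update: keep mass on profiles consistent with the answer."""
    post = weights * (profiles[:, dim] == answer)
    return post / post.sum()

# Online phase: simulate a user with a fixed (hypothetical) true profile
# under a limited question budget of two questions.
true_profile = np.array([1, 1, 0, 1])
asked = set()
for _ in range(2):
    q = pick_question(weights, asked)
    asked.add(q)
    weights = update(weights, q, true_profile[q])

# Predict every dimension, including the ones never asked about, by
# thresholding posterior marginals.
prediction = np.array(
    [marginal(weights, d) >= 0.5 for d in range(profiles.shape[1])]
).astype(int)
print(prediction)  # recovers all 4 dimensions from only 2 answers here
```

Because the offline prior encodes correlations between dimensions (here, via co-occurrence in the stored profiles), two answers constrain the remaining dimensions as well, which is the sense in which the method "predicts complete preference profiles, including dimensions never asked about."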