PREF-XAI: Preference-Based Personalized Rule Explanations of Black-Box Machine Learning Models
2026-04-21 • Machine Learning
AI summary
The authors present a new way to make AI explanations fit what each user wants and understands. Instead of giving one fixed explanation, their method offers different candidate explanations and learns which ones the user likes best by asking them to rank a few examples. This lets the system pick better explanations over time and even find new ways to explain things the user had not considered. The approach combines rule-based explanations with preference learning: user rankings are turned into an additive utility model that scores candidate explanations. Experiments on real-world data show the method personalizes explanations effectively.
Explainable Artificial Intelligence (XAI), Preference Learning, Ordinal Regression, Rule-Based Explanations, User-Centric Explanations, Black-Box Models, Additive Utility Function, Interactive Systems, Personalization, Preference-Based Decision Making
Authors
Salvatore Greco, Jacek Karolczak, Roman Słowiński, Jerzy Stefanowski
Abstract
Explainable artificial intelligence (XAI) has predominantly focused on generating model-centric explanations that approximate the behavior of black-box models. However, such explanations often overlook a fundamental aspect of interpretability: different users require different explanations depending on their goals, preferences, and cognitive constraints. Although recent work has explored user-centric and personalized explanations, most existing approaches rely on heuristic adaptations or implicit user modeling, lacking a principled framework for representing and learning individual preferences. In this paper, we introduce Preference-Based Explainable Artificial Intelligence (PREF-XAI), a novel perspective that reframes explanation as a preference-driven decision problem. Within PREF-XAI, explanations are not treated as fixed outputs but as alternatives to be evaluated and selected according to user-specific criteria. Within this perspective, we propose a methodology that combines rule-based explanations with formal preference learning. User preferences are elicited through a ranking of a small set of candidate explanations and modeled via an additive utility function inferred using robust ordinal regression. Experimental results on real-world datasets show that PREF-XAI can accurately reconstruct user preferences from limited feedback, identify highly relevant explanations, and discover novel explanatory rules not initially considered by the user. Beyond the proposed methodology, this work establishes a connection between XAI and preference learning, opening new directions for interactive and adaptive explanation systems.
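To make the core mechanism concrete, the following is a minimal sketch of how a user ranking of candidate explanations can be converted into an additive utility function. It is an illustration, not the authors' implementation: the paper infers the utility with robust ordinal regression over all value functions compatible with the ranking, whereas this sketch fits a single max-margin linear utility via a linear program. The function name `fit_additive_utility` and the criteria (e.g., rule coverage and simplicity) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fit_additive_utility(X, ranking):
    """Fit weights w of a linear additive utility u(x) = w . x from a
    user ranking of alternatives (indices ordered best to worst).

    Solves an LP that maximizes the margin eps subject to
    u(better) >= u(worse) + eps for each consecutive ranked pair,
    with nonnegative weights summing to 1.
    """
    n, m = X.shape
    # Decision variables: w (m weights) and eps; minimize -eps.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub, b_ub = [], []
    for better, worse in zip(ranking[:-1], ranking[1:]):
        # w.(x_worse - x_better) + eps <= 0  <=>  u(better) - u(worse) >= eps
        row = np.zeros(m + 1)
        row[:m] = X[worse] - X[better]
        row[-1] = 1.0
        A_ub.append(row)
        b_ub.append(0.0)
    # Normalization: weights sum to 1.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * (m + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    w, eps = res.x[:m], res.x[-1]
    return w, eps
```

For example, if three candidate explanations are scored on two criteria (coverage, simplicity) and the user ranks the simplest one first, the fitted weights concentrate on the simplicity criterion and the resulting utility reproduces the user's ranking. A positive margin `eps` indicates the ranking is consistent with some additive linear utility; robust ordinal regression would additionally characterize the whole set of such compatible utilities rather than one.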