EnsembleSHAP: Faithful and Certifiably Robust Attribution for Random Subspace Method
2026-03-31 • Cryptography and Security
AI summary
The authors study a method called random subspace used to make AI systems more secure against attacks. They point out that current tools to explain how these methods work are slow and not very safe. Their new method, EnsembleSHAP, is faster, keeps important explanation qualities, and protects privacy better. They also tested it against various attacks and showed it works well. This is the first time anyone has shown explanations that are provably resistant to certain kinds of attacks.
random subspace method, feature attribution, Shapley value, LIME, adversarial attacks, backdoor attacks, jailbreak attacks, privacy-preserving attacks, explanation robustness, EnsembleSHAP
Authors
Yanting Wang, Jinyuan Jia
Abstract
The random subspace method has wide security applications, such as providing certified defenses against adversarial and backdoor attacks and building robustly aligned LLMs that resist jailbreak attacks. However, explanations for the random subspace method remain insufficiently explored. Existing state-of-the-art feature attribution methods, such as Shapley value and LIME, are computationally impractical and lack security guarantees when applied to the random subspace method. In this work, we propose EnsembleSHAP, an intrinsically faithful and secure feature attribution method for the random subspace method that reuses its computational byproducts. Specifically, our feature attribution method 1) is computationally efficient, 2) maintains essential properties of effective feature attribution (such as local accuracy), and 3) offers guaranteed protection against privacy-preserving attacks on feature attribution methods. To the best of our knowledge, this is the first work to establish provable robustness against explanation-preserving attacks. We also perform comprehensive evaluations of our explanations' effectiveness against different empirical attacks, including backdoor attacks, adversarial attacks, and jailbreak attacks. The code is available at https://github.com/Wang-Yanting/EnsembleSHAP. WARNING: This document may include content that could be considered harmful.
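To make the core idea of "reusing computational byproducts" concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): a random subspace ensemble evaluates the model on many random feature subsets to form its prediction, and those same per-subset outputs can be reused to score each feature by comparing subsets that include it against subsets that exclude it, at no extra inference cost. All function names and the masking scheme below are illustrative assumptions.

```python
import numpy as np

def random_subspace_predict(x, subset_masks, base_predict):
    # Random subspace method (sketch): each sub-model sees only a random
    # subset of features; the remaining features are masked to zero.
    votes = [base_predict(np.where(m, x, 0.0)) for m in subset_masks]
    # Majority vote over binary sub-model outputs.
    return int(np.bincount(votes, minlength=2).argmax()), votes

def ensemble_shap_sketch(subset_masks, votes, n_features):
    # Reuse the per-subset votes as byproducts: score feature i by the
    # difference between the mean vote of subsets containing i and the
    # mean vote of subsets excluding i (a crude Shapley-style contrast).
    votes = np.asarray(votes, dtype=float)
    masks = np.asarray(subset_masks, dtype=bool)
    scores = np.zeros(n_features)
    for i in range(n_features):
        inc, exc = votes[masks[:, i]], votes[~masks[:, i]]
        if len(inc) and len(exc):
            scores[i] = inc.mean() - exc.mean()
    return scores
```

For example, with a toy base model that fires only when feature 0 is present, feature 0 receives the highest score while irrelevant features score near zero; no additional model evaluations are needed beyond those already performed for the ensemble prediction.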