Quantum Interval Bound Propagation for Certified Training of Quantum Neural Networks
2026-05-01 • Machine Learning
AI summary
The authors introduce a new method called quantum interval bound propagation (QIBP) to make quantum machine learning models more reliable against adversarial attacks, which are small changes in input designed to trick the model. QIBP tracks bounds on predictions during training to guarantee the model's output won't change wrongly under such attacks. They test two ways to do this tracking (interval and affine arithmetic) and show that their method produces models with strong, trustworthy decision boundaries. This extends a classical technique called interval bound propagation into the quantum domain. The paper provides certified training routines that improve the robustness of quantum models.
quantum machine learning, interval bound propagation, adversarial perturbations, certified training, interval arithmetic, affine arithmetic, robustness, quantum models, decision boundaries, adversarial robustness
Authors
Emma Andrews, Nahyeon Kim, Prabhat Mishra
Abstract
Quantum machine learning is a promising field for efficiently learning features of a dataset to perform a specified task, such as classification. Interval bound propagation (IBP) is a popular certified training method in classical machine learning, where lower and upper bounds on activations are tracked throughout the model. These bounds are used during training to ensure that the model is certified to predict the correct label even under adversarial perturbations. While IBP is successful in the classical domain, there are limited certified training efforts in the quantum domain. In this paper, we present quantum interval bound propagation (QIBP) to establish a certified training routine for quantum machine learning, certifying the accuracy of models under adversarial perturbations. We implement QIBP using both interval and affine arithmetic to explore the tradeoffs between the two implementations in terms of accuracy and other design considerations. Extensive evaluation demonstrates that the resulting certified trained models have robust decision boundaries, guaranteed to predict the correct class for samples within the trained adversarial robustness bounds.
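For intuition, the classical interval-arithmetic bound tracking that the abstract builds on can be sketched as follows. This is an illustrative NumPy sketch of standard IBP through a small feed-forward network, not the authors' quantum implementation; the network weights, input, and perturbation radius are hypothetical, chosen only to show how bounds propagate and how a certificate is checked.

```python
import numpy as np

def interval_linear(W, b, lower, upper):
    """Propagate an axis-aligned input box through y = W @ x + b.

    Standard interval-arithmetic rule: push the box center through the
    layer exactly, and scale the box radius by |W| elementwise, which
    gives sound (if loose) output bounds.
    """
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius  # |W| so the radius never flips sign
    return out_center - out_radius, out_center + out_radius

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Hypothetical 2-layer network and input, for illustration only.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0], [-1.0, 0.5]])
b2 = np.array([0.0, 0.0])

x = np.array([1.0, 0.5])
eps = 0.1  # L-infinity perturbation radius

l, u = x - eps, x + eps
l, u = interval_linear(W1, b1, l, u)
l, u = interval_relu(l, u)
l, u = interval_linear(W2, b2, l, u)

# The prediction for class 0 is certified if its worst-case logit still
# exceeds the best-case logit of the other class over the whole box.
certified = bool(l[0] > u[1])
```

Certified training, as described in the abstract, folds this check into the loss: the bounds are propagated during each training step, and the model is penalized whenever the worst-case logit gap fails to certify the correct label.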