Theory and interpretability of Quantum Extreme Learning Machines: a Pauli-transfer matrix approach

2026-02-20

Machine Learning
AI summary

The authors study a type of quantum machine learning model called quantum extreme learning machines (QELMs), which use quantum systems to process data but, unlike other reservoir models, have no memory. They use a mathematical tool called the Pauli transfer matrix to understand how data encoding, the quantum system's dynamics, and measurements affect the QELM's ability to learn. This approach shows that the way data is encoded fixes the set of features the model can learn from, while the quantum system transforms these features in predictable ways before they are measured. By viewing training as a decoding task, the authors suggest ways to design QELMs for specific learning goals. They demonstrate this by showing that a QELM can learn to approximate the behavior of complex nonlinear systems.

Keywords
Quantum reservoir computers, Quantum extreme learning machines, Pauli transfer matrix, Quantum channels, Data encoding, Temporal multiplexing, Nonlinear dynamical systems, Surrogate model, Machine learning
Authors
Markus Gross, Hans-Martin Rieser
Abstract
Quantum reservoir computers (QRCs) have emerged as a promising approach to quantum machine learning, since they utilize the natural dynamics of quantum systems for data processing and are simple to train. Here, we consider n-qubit quantum extreme learning machines (QELMs) with continuous-time reservoir dynamics. QELMs are memoryless QRCs capable of various ML tasks, including image classification and time series forecasting. We apply the Pauli transfer matrix (PTM) formalism to theoretically analyze the influence of encoding, reservoir dynamics, and measurement operations, including temporal multiplexing, on the QELM performance. This formalism makes explicit that the encoding determines the complete set of (nonlinear) features available to the QELM, while the quantum channels linearly transform these features before they are probed by the chosen measurement operators. Optimizing a QELM can therefore be cast as a decoding problem in which one shapes the channel-induced transformations such that task-relevant features become available to the regressor. The PTM formalism allows one to identify the classical representation of a QELM and thereby guide its design towards a given training objective. As a specific application, we focus on learning nonlinear dynamical systems and show that a QELM trained on such trajectories learns a surrogate approximation to the underlying flow map.
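To make the central object of the abstract concrete, the following is a minimal numerical sketch (not taken from the paper) of a Pauli transfer matrix: given an n-qubit channel, its PTM entries are R[i, j] = Tr[P_i E(P_j)] / 2^n over the Pauli string basis, and composing channels corresponds to multiplying their PTMs. All function names here (`pauli_basis`, `ptm`) are illustrative, not from the paper.

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices; n-qubit strings are their tensor products.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_basis(n):
    """All 4^n n-qubit Pauli strings as 2^n x 2^n matrices."""
    basis = []
    for labels in product("IXYZ", repeat=n):
        P = np.array([[1.0 + 0j]])
        for l in labels:
            P = np.kron(P, PAULIS[l])
        basis.append(P)
    return basis

def ptm(channel, n):
    """PTM of `channel` (a map on 2^n x 2^n matrices):
    R[i, j] = Tr[P_i channel(P_j)] / 2^n, a real 4^n x 4^n matrix."""
    basis = pauli_basis(n)
    d = 2 ** n
    R = np.empty((4 ** n, 4 ** n))
    for j, Pj in enumerate(basis):
        out = channel(Pj)
        for i, Pi in enumerate(basis):
            R[i, j] = np.real(np.trace(Pi @ out)) / d
    return R

# Example: the Hadamard channel rho -> H rho H† swaps X and Z and flips Y,
# so its PTM permutes the corresponding Pauli components.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
R = ptm(lambda rho: H @ rho @ H.conj().T, 1)
```

In this picture a state becomes a real vector of Pauli expectation values (the "features" of the abstract), the reservoir channel acts on that vector as the linear map R, and each measurement operator reads out one linear functional of the transformed vector.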