Complex Interpolation of Matrices with an Application to Multi-Manifold Learning
2026-04-15 • Machine Learning
AI summary
The authors study what happens when two special matrices, called symmetric positive-definite matrices, are blended by interpolating between them. They show that if the size (operator norm) of this combination changes in an exactly log-linear way, the two matrices must share a common direction (an eigenvector). If log-linearity holds only approximately, the main directions (singular vectors) of the combination stay close to the leading directions of both matrices. These findings explain and support methods for finding common patterns in data from different sources.
symmetric positive-definite matrix, eigenvector, operator norm, interpolation, log-linearity, singular vectors, spectral properties, multiview data, latent structures, manifold learning
Authors
Adi Arbel, Stefan Steinerberger, Ronen Talmon
Abstract
Given two symmetric positive-definite matrices $A, B \in \mathbb{R}^{n \times n}$, we study the spectral properties of the interpolation $A^{1-x} B^x$ for $0 \leq x \leq 1$. The presence of 'common structures' in $A$ and $B$, that is, eigenvectors pointing in similar directions, can be investigated from this interpolation perspective. Generically, exact log-linearity of the operator norm $\|A^{1-x} B^x\|$ is equivalent to the existence of a shared eigenvector of the original matrices; stability bounds show that approximate log-linearity forces the principal singular vectors to align with leading eigenvectors of both matrices. These results give rise to, and provide theoretical justification for, a multi-manifold learning framework that identifies common and distinct latent structures in multiview data.
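A minimal numerical sketch of the abstract's central quantity, assuming NumPy; the helper names (spd_power, interp_norm, random_spd) and the block-diagonal construction of the example matrices are hypothetical illustrations, not the authors' code. It computes the fractional powers of SPD matrices via their eigendecompositions (valid because the eigenvalues are positive) and measures how far $\log \|A^{1-x} B^x\|$ deviates from the log-linear interpolant $(1-x)\log\|A\| + x\log\|B\|$: a pair built to share a leading eigenvector should track the line to floating-point precision, while a generic pair should deviate at interior $x$.

```python
import numpy as np

rng = np.random.default_rng(0)

def spd_power(M, t):
    """Real power M^t of a symmetric positive-definite matrix,
    via the eigendecomposition M = V diag(w) V^T."""
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def interp_norm(A, B, x):
    """Operator norm ||A^{1-x} B^x|| (largest singular value)."""
    return np.linalg.norm(spd_power(A, 1.0 - x) @ spd_power(B, x), ord=2)

def random_spd(n, lo=0.5, hi=5.0):
    """Random SPD matrix with eigenvalues drawn from [lo, hi]."""
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    return Q @ np.diag(rng.uniform(lo, hi, n)) @ Q.T

n = 5
# Shared structure: e_1 is the leading eigenvector of both A and B;
# the remaining blocks are independent random SPD matrices with
# strictly smaller eigenvalues, so e_1 stays dominant along the path.
A = np.zeros((n, n)); A[0, 0] = 10.0; A[1:, 1:] = random_spd(n - 1)
B = np.zeros((n, n)); B[0, 0] = 7.0;  B[1:, 1:] = random_spd(n - 1)
C = random_spd(n)  # generic: no eigenvector shared with A

for x in np.linspace(0.0, 1.0, 6):
    # Deviation of log ||A^{1-x} B^x|| from the log-linear interpolant
    # (1-x) log ||A|| + x log ||B||; zero means exact log-linearity.
    dev_shared = np.log(interp_norm(A, B, x)) - (
        (1 - x) * np.log(np.linalg.norm(A, 2)) + x * np.log(np.linalg.norm(B, 2)))
    dev_generic = np.log(interp_norm(A, C, x)) - (
        (1 - x) * np.log(np.linalg.norm(A, 2)) + x * np.log(np.linalg.norm(C, 2)))
    print(f"x={x:.1f}  shared: {dev_shared:+.2e}  generic: {dev_generic:+.2e}")
```

The shared column vanishes by construction, since the block structure forces $\|A^{1-x} B^x\| = 10^{1-x}\, 7^x$ exactly; the nonzero generic column is the kind of deviation the abstract's stability bounds would quantify.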