Knowledge-Embedded Latent Projection for Robust Representation Learning
2026-02-18 • Machine Learning
AI summary
The authors address the problem of analyzing large, imbalanced data matrices from electronic health records, where the number of features far exceeds the number of patients. They propose a new method that uses existing semantic information about medical concepts to guide the learning of low-dimensional data representations. By combining tools such as kernel principal component analysis and projected gradient descent, they make the method both accurate and computationally efficient. Their approach comes with proven guarantees on estimation accuracy, and they demonstrate its usefulness with simulations and real EHR data.
latent space models, electronic health records (EHR), semantic embeddings, reproducing kernel Hilbert space, kernel principal component analysis, projected gradient descent, non-convex optimization, representation learning, estimation error bounds
Authors
Weijing Tang, Ming Yuan, Zongqi Xia, Tianxi Cai
Abstract
Latent space models are widely used for analyzing high-dimensional discrete data matrices, such as patient-feature matrices in electronic health records (EHRs), by capturing complex dependence structures through low-dimensional embeddings. However, estimation becomes challenging in the imbalanced regime, where one matrix dimension is much larger than the other. In EHR applications, cohort sizes are often limited by disease prevalence or data availability, whereas the feature space remains extremely large due to the breadth of medical coding systems. Motivated by the increasing availability of external semantic embeddings, such as pre-trained embeddings of clinical concepts in EHRs, we propose a knowledge-embedded latent projection model that leverages semantic side information to regularize representation learning. Specifically, we model column embeddings as smooth functions of semantic embeddings via a mapping in a reproducing kernel Hilbert space. We develop a computationally efficient two-step estimation procedure that combines semantically guided subspace construction via kernel principal component analysis with scalable projected gradient descent. We establish estimation error bounds that characterize the trade-off between statistical error and approximation error induced by the kernel projection. Furthermore, we provide local convergence guarantees for our non-convex optimization procedure. Extensive simulation studies and a real-world EHR application demonstrate the effectiveness of the proposed method.
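The two-step procedure described in the abstract can be illustrated with a minimal sketch. The following toy example (not the authors' implementation; all sizes, the Gaussian kernel, the logistic link, and the step size are illustrative assumptions) first builds a semantically guided basis via kernel PCA on pre-trained semantic embeddings, then fits a low-rank logit model by projected gradient descent with column embeddings constrained to the kernel subspace, i.e. V = Φ B:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n patients, p features (p >> n), semantic dim d,
# latent rank r, and m retained kernel principal components.
n, p, d, r, m = 50, 500, 10, 3, 20

# Pre-trained semantic embeddings of the p features (assumed given).
S = rng.normal(size=(p, d))

# Step 1: kernel PCA on semantic embeddings (Gaussian kernel) to build
# a p x m basis Phi spanning the semantically guided subspace.
sq = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * d))
H = np.eye(p) - np.ones((p, p)) / p      # centering matrix
vals, vecs = np.linalg.eigh(H @ K @ H)   # eigenvalues in ascending order
Phi = vecs[:, -m:]                       # top-m kernel principal directions

# Simulate a binary patient-feature matrix from a low-rank logit model
# whose column embeddings lie in span(Phi).
U_true = rng.normal(size=(n, r))
V_true = Phi @ rng.normal(size=(m, r))
X = rng.binomial(1, 1 / (1 + np.exp(-(U_true @ V_true.T))))

# Step 2: gradient descent on the logistic loss, with column embeddings
# parameterized as V = Phi @ B so iterates stay in the kernel subspace.
U = 0.1 * rng.normal(size=(n, r))
B = 0.1 * rng.normal(size=(m, r))
lr = 0.5 / np.sqrt(n * p)

def mean_logistic_loss(U, B):
    P = 1 / (1 + np.exp(-(U @ (Phi @ B).T)))
    return -np.mean(X * np.log(P + 1e-12) + (1 - X) * np.log(1 - P + 1e-12))

loss0 = mean_logistic_loss(U, B)
for _ in range(300):
    V = Phi @ B
    P = 1 / (1 + np.exp(-(U @ V.T)))     # fitted probabilities
    G = P - X                            # gradient of logistic loss wrt logits
    U, B = U - lr * (G @ V), B - lr * (Phi.T @ (G.T @ U))

loss = mean_logistic_loss(U, B)          # should be below loss0
```

The subspace constraint is what makes the imbalanced regime tractable: only the n·r patient embeddings and m·r basis coefficients are estimated, rather than p·r free column embeddings.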