Scaling Recurrence-aware Foundation Models for Clinical Records via Next-Visit Prediction

2026-03-25

Machine Learning
AI summary

The authors developed RAVEN, a new method for predicting the events of a patient's next healthcare visit using large amounts of patient data from electronic health records (EHRs). The model learns to generate the events of a patient's upcoming visit conditioned on their medical history, while addressing issues like repeated events that can mislead evaluation. They found that simply making the model bigger does not help unless more data is also added. RAVEN performs well at predicting diseases without extra training and can adapt to different patient groups, even when some data is missing or simplified.

electronic health records (EHRs), pretraining, autoregressive modeling, next-visit prediction, regularization, foundation models, zero-shot prediction, Transformer models, clinical code mappings, scaling behavior
Authors
Haresh Rengaraj Rajamohan, Xiang Gao, Weicheng Zhu, Shih-Lun Huang, Long Chen, Gabe Schulman, Huizhen Jin, Shengduo Li, Yixuan Wang, Huidi Yang, Kyunghyun Cho, Cem M. Deniz, Narges Razavian
Abstract
While large-scale pretraining has revolutionized language modeling, its potential remains underexplored in healthcare with structured electronic health records (EHRs). We present RAVEN, a novel generative pretraining strategy for sequential EHR data based on Recurrence-Aware next-Visit EveNt prediction. Leveraging a dataset of over one million unique individuals, our model learns to autoregressively generate tokenized clinical events for the next visit conditioned on patient history. We introduce regularization on predicting repeated events and highlight a key pitfall in EHR-based foundation model evaluations: repeated event tokens can inflate performance metrics when new onsets are not distinguished from subsequent occurrences. Furthermore, we empirically investigate the scaling behaviors in a data-constrained, compute-saturated regime, showing that simply increasing model size is suboptimal without commensurate increases in data volume. We evaluate our model via zero-shot prediction for forecasting the incidence of a diverse set of diseases, where it rivals fully fine-tuned representation-based Transformer models and outperforms widely used simulation-based next-token approaches. Finally, without additional parameter updates, we show that RAVEN can generalize to an external patient cohort under lossy clinical code mappings and feature coverage gaps.
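The abstract's two core ideas, down-weighting the loss on repeated events and separating first-time onsets from recurrences at evaluation time, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the set-of-codes visit representation, and the `repeat_weight` value are all illustrative assumptions.

```python
from typing import List, Set, Tuple

def recurrence_weights(history: List[Set[str]], next_visit: List[str],
                       repeat_weight: float = 0.25) -> List[float]:
    """Per-token loss weights for next-visit event prediction.

    Codes already present anywhere in the patient's history are recurrences
    and receive a down-weighted loss (repeat_weight, a hypothetical
    hyperparameter); first-time onsets keep full weight 1.0.
    """
    seen = set().union(*history) if history else set()
    return [repeat_weight if code in seen else 1.0 for code in next_visit]

def split_onsets(history: List[Set[str]],
                 next_visit: List[str]) -> Tuple[List[str], List[str]]:
    """Separate new onsets from recurrences in the target visit.

    Scoring only the onsets avoids the evaluation pitfall the abstract
    describes: repeated event tokens are easy to predict and can inflate
    performance metrics when mixed in with genuinely new diagnoses.
    """
    seen = set().union(*history) if history else set()
    onsets = [c for c in next_visit if c not in seen]
    repeats = [c for c in next_visit if c in seen]
    return onsets, repeats
```

For example, for a history of two visits `[{"E11.9", "I10"}, {"I10"}]` and a target visit `["I10", "N18.3"]`, the hypertension code `I10` is a recurrence (weight 0.25) while the kidney-disease code `N18.3` is a new onset (weight 1.0), and only `N18.3` would count toward onset-level evaluation.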