PVminer: A Domain-Specific Tool to Detect the Patient Voice in Patient Generated Data

2026-02-24

Computation and Language · Artificial Intelligence
AI summary

The authors developed a new natural language processing tool called PVminer to better understand what patients say in secure messages with their healthcare providers. This tool looks at both how patients communicate and social factors that affect health, combining these into one system. PVminer uses special language models adapted for patient language and includes topic analysis to improve accuracy. Their method works better than existing models and can handle complex labeling of patient messages, making it easier to analyze large amounts of patient-generated text. They will share their models and code publicly for further research.

patient voice · natural language processing · BERT · social determinants of health · secure messaging · topic modeling · machine learning · multi-label classification
Authors
Samah Fodeh, Linhai Ma, Yan Wang, Srivani Talakokkul, Ganesh Puthiaraju, Afshan Khan, Ashley Hagaman, Sarah Lowe, Aimee Roundtree
Abstract
Patient-generated text such as secure messages, surveys, and interviews contains rich expressions of the patient voice (PV), reflecting communicative behaviors and social determinants of health (SDoH). Traditional qualitative coding frameworks are labor intensive and do not scale to the large volumes of patient-authored messages generated across health systems. Existing machine learning (ML) and natural language processing (NLP) approaches provide partial solutions but often treat patient-centered communication (PCC) and SDoH as separate tasks or rely on models not well suited to patient-facing language. We introduce PVminer, a domain-adapted NLP framework for structuring the patient voice in secure patient-provider communication. PVminer formulates PV detection as a multi-label, multi-class prediction task integrating patient-specific BERT encoders (PV-BERT-base and PV-BERT-large), unsupervised topic modeling for thematic augmentation (PV-Topic-BERT), and fine-tuned classifiers for Code-, Subcode-, and Combo-level labels. Topic representations are incorporated during fine-tuning and inference to enrich semantic inputs. PVminer achieves strong performance across hierarchical tasks and outperforms biomedical and clinical pre-trained baselines, with F1 scores of 82.25% (Code), 80.14% (Subcode), and up to 77.87% (Combo). An ablation study further shows that author identity and topic-based augmentation each contribute meaningful gains. Pre-trained models, source code, and documentation will be publicly released, with annotated datasets available upon request for research use.
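To make the abstract's formulation concrete, the sketch below illustrates the general pattern it describes: an encoder embedding concatenated with an unsupervised topic distribution, fed to independent per-label sigmoids for multi-label prediction. This is a minimal illustration, not the released PVminer implementation; all dimensions, weights, and the stand-in random "embedding" are hypothetical, and a real pipeline would use the pooled output of a fine-tuned PV-BERT encoder in place of the toy vector.

```python
import numpy as np

# Hypothetical, illustrative dimensions (not from the paper):
HIDDEN = 8      # stand-in for the BERT pooled-output size (768 in practice)
N_TOPICS = 3    # size of the unsupervised topic distribution
N_LABELS = 4    # e.g. number of Code-level labels

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_multilabel(pooled, topic_dist, W, b, threshold=0.5):
    """Concatenate the encoder output with topic features (thematic
    augmentation), then apply an independent sigmoid per label.
    Multi-label means each label fires independently (no softmax)."""
    x = np.concatenate([pooled, topic_dist])
    probs = sigmoid(W @ x + b)                 # one probability per label
    return (probs >= threshold).astype(int), probs

# Toy inputs standing in for a PV-BERT embedding and topic proportions.
pooled = rng.standard_normal(HIDDEN)
topic_dist = rng.dirichlet(np.ones(N_TOPICS))  # sums to 1, like LDA output
W = rng.standard_normal((N_LABELS, HIDDEN + N_TOPICS))
b = np.zeros(N_LABELS)

labels, probs = predict_multilabel(pooled, topic_dist, W, b)
print(labels)  # a 0/1 vector; zero, one, or several labels may fire
```

The key design point the abstract implies is the per-label sigmoid: because a patient message can express several communicative behaviors and SDoH factors at once, labels must not compete as they would under a softmax.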