Developing and evaluating a chatbot to support maternal health care

2026-03-13

Artificial Intelligence · Computation and Language · Information Retrieval
AI summary

The authors developed a phone-based chatbot to provide reliable maternal health advice in India, where users often speak mixed languages and ask brief or unclear questions. Their system uses a staged process to identify urgent cases, fetches answers from trusted health guidelines, and generates responses with the help of a large language model. They also created dedicated tests, involving experts and clinicians, to check the chatbot's accuracy and safety before real use. The authors found that building such trustworthy chatbots requires careful design and multiple complementary types of evaluation, not just one approach.

maternal health · chatbot · triage · code-mixing · large language model · health literacy · evidence retrieval · evaluation benchmark · emergency recall · multilingual NLP
Authors
Smriti Jha, Vidhi Jain, Jianyu Xu, Grace Liu, Sowmya Ramesh, Jitender Nagpal, Gretchen Chapman, Benjamin Bellows, Siddhartha Goyal, Aarti Singh, Bryan Wilder
Abstract
The ability to provide trustworthy maternal health information through phone-based chatbots can have a significant impact, particularly in low-resource settings where users have low health literacy and limited access to care. However, deploying such systems is technically challenging: user queries are short, underspecified, and code-mixed across languages; answers require grounding in region-specific context; and partial or missing symptom context makes safe routing decisions difficult. We present a chatbot for maternal health in India developed through a partnership between academic researchers, a health tech company, a public health nonprofit, and a hospital. The system combines (1) stage-aware triage, routing high-risk queries to expert templates, (2) hybrid retrieval over curated maternal/newborn guidelines, and (3) evidence-conditioned generation from an LLM. Our core contribution is an evaluation workflow for high-stakes deployment under limited expert supervision. Targeting both component-level and end-to-end testing, we introduce: (i) a labeled triage benchmark (N=150) achieving 86.7% emergency recall, explicitly reporting the missed-emergency vs. over-escalation trade-off; (ii) a synthetic multi-evidence retrieval benchmark (N=100) with chunk-level evidence labels; (iii) LLM-as-judge comparison on real queries (N=781) using clinician-co-designed criteria; and (iv) expert validation. Our findings show that trustworthy medical assistants in multilingual, noisy settings require defense-in-depth design paired with multi-method evaluation, rather than any single model or evaluation-method choice.
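To make the defense-in-depth pipeline concrete, here is a minimal sketch of the three-stage flow the abstract describes: stage-aware triage routes high-risk queries to expert templates, hybrid retrieval selects evidence from curated guidelines, and an LLM generates a response conditioned on that evidence. All names, keyword rules, and the overlap-based scoring below are illustrative assumptions, not the authors' implementation; real triage and retrieval would use trained classifiers and BM25/dense retrievers.

```python
# Hypothetical sketch of a triage -> retrieve -> generate pipeline.
# Keyword rules and scoring are placeholder assumptions for illustration.

EMERGENCY_TERMS = {"bleeding", "seizure", "no fetal movement", "severe pain"}

def triage(query: str, stage: str) -> str:
    """Stage-aware triage: route high-risk queries to expert templates."""
    q = query.lower()
    if any(term in q for term in EMERGENCY_TERMS):
        return "emergency"
    # Example stage-aware rule: postpartum fever escalates to emergency.
    if stage == "postpartum" and "fever" in q:
        return "emergency"
    return "routine"

def hybrid_retrieve(query: str, corpus: dict, k: int = 2) -> list:
    """Toy stand-in for hybrid (lexical + dense) retrieval: rank guideline
    chunks by word overlap with the query and return the top-k ids."""
    query_terms = set(query.lower().split())
    scored = []
    for doc_id, text in corpus.items():
        overlap = len(query_terms & set(text.lower().split()))
        scored.append((overlap, doc_id))
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored[:k] if score > 0]

def answer(query: str, stage: str, corpus: dict) -> dict:
    """End-to-end flow: triage first; only safe queries reach retrieval."""
    route = triage(query, stage)
    if route == "emergency":
        # High-risk queries bypass generation and use a vetted template.
        return {"route": route, "response": "EXPERT_TEMPLATE: seek care now"}
    evidence = hybrid_retrieve(query, corpus)
    # In the real system, an LLM would generate a response conditioned
    # on the retrieved evidence chunks here.
    return {"route": route, "evidence": evidence}

guidelines = {
    "iron": "iron supplements during pregnancy reduce anemia risk",
    "diet": "eat a balanced diet with vegetables and protein",
}

print(answer("heavy bleeding after delivery", "postpartum", guidelines))
print(answer("should I take iron supplements", "pregnancy", guidelines))
```

The key design point mirrored here is ordering: triage runs before retrieval or generation, so a missed retrieval can never downgrade an emergency, which is what makes the missed-emergency vs. over-escalation trade-off the metric that matters most.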