Learning Robust Markov Models for Safe Runtime Monitoring
2026-02-16 • Logic in Computer Science
AI summary
The authors develop a way to build safety monitors for autonomous systems that observe how the system behaves and estimate the risk of safety violations before they occur. They first learn an interval Hidden Markov Model (iHMM), a model type that captures the system's uncertain, stochastic behavior. Their method refines the learned model through conformance testing, with convergence guarantees, and provides an efficient algorithm for computing risk estimates from the model. Experiments show that this model-based approach outperforms model-free methods that rely only on collected data without any model of the system.
runtime monitor, autonomous systems, safety assurance, interval Hidden Markov Model, stochastic behavior, risk estimation, conformance testing, model-based monitoring, model-free monitoring, convergence guarantees
Authors
Antonina Skurka, Luko van der Maas, Sebastian Junges, Hazem Torfah
Abstract
We present a model-based approach to learning robust runtime monitors for autonomous systems. Runtime monitors play a crucial role in raising the level of assurance by observing system behavior and predicting potential safety violations. In our approach, we propose to capture a system's (stochastic) behavior using interval Hidden Markov Models (iHMMs). The monitor then uses this learned iHMM to derive risk estimates for potential safety violations. The paper makes three key contributions: (1) it formalizes the problem of learning robust runtime monitors, (2) introduces a novel framework that uses conformance-testing-based refinement to learn robust iHMMs with convergence guarantees, and (3) presents an efficient monitoring algorithm for computing risk estimates over iHMMs. Our empirical results demonstrate the efficacy of monitors learned using our approach, particularly when compared to model-free monitoring approaches that rely solely on collected data without access to a system model.
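The abstract describes the monitoring step only at a high level. As a rough, hypothetical sketch (not the paper's algorithm), the snippet below shows one way a monitor could propagate interval bounds on its belief over the hidden states of an iHMM and read off a conservative upper bound on the probability of reaching an unsafe state within a fixed horizon. The conservative normalization, the absorbing-state reachability bound, and all function names are illustrative assumptions rather than the authors' method.

# Hypothetical sketch of risk estimation over an interval HMM; NOT the paper's
# algorithm. Transition probabilities are given as element-wise interval bounds
# (T_lo <= T <= T_hi); belief bounds are propagated with a crude but sound
# over-approximation instead of exact optimization over the interval polytope.
import numpy as np

def interval_filter_step(bel_lo, bel_hi, T_lo, T_hi, obs_lik):
    """One conservative predict-and-update step on belief bounds (shape (n,))."""
    pred_lo = bel_lo @ T_lo                  # least possible pushed-forward mass
    pred_hi = bel_hi @ T_hi                  # greatest possible pushed-forward mass
    up_lo, up_hi = pred_lo * obs_lik, pred_hi * obs_lik
    # Conservative normalization: lower bounds against the largest possible
    # total mass, upper bounds against the smallest possible total mass.
    bel_lo_new = up_lo / max(up_hi.sum(), 1e-12)
    bel_hi_new = np.minimum(up_hi / max(up_lo.sum(), 1e-12), 1.0)
    return bel_lo_new, bel_hi_new

def risk_upper_bound(bel_hi, T_hi, unsafe, horizon):
    """Upper bound on reaching an unsafe state within `horizon` steps:
    make unsafe states absorbing and push the upper belief bound forward."""
    T_abs = T_hi.copy()
    T_abs[unsafe, :] = 0.0
    T_abs[unsafe, unsafe] = 1.0              # unsafe states keep all their mass
    b = bel_hi.copy()
    for _ in range(horizon):
        b = np.minimum(b @ T_abs, 1.0)
    return float(min(b[unsafe].sum(), 1.0))

# Toy usage on a 3-state model where state 2 is unsafe.
T_lo = np.array([[0.6, 0.3, 0.0], [0.1, 0.7, 0.1], [0.0, 0.0, 1.0]])
T_hi = np.array([[0.7, 0.4, 0.1], [0.2, 0.8, 0.2], [0.0, 0.0, 1.0]])
bel_lo = np.array([1.0, 0.0, 0.0])
bel_hi = np.array([1.0, 0.0, 0.0])
obs_lik = np.array([0.8, 0.5, 0.1])          # likelihood of the latest observation
bel_lo, bel_hi = interval_filter_step(bel_lo, bel_hi, T_lo, T_hi, obs_lik)
print(risk_upper_bound(bel_hi, T_hi, unsafe=np.array([2]), horizon=5))

A monitor built this way would raise an alarm whenever the returned upper bound exceeds a chosen risk threshold; tighter bounds would require optimizing over the interval transition polytope rather than the element-wise bounds used here.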