Runtime Anomaly Detection and Assurance Framework for AI-Driven Nurse Call Systems
Liu, Y.; Concepcion, D.
This paper proposes a runtime anomaly detection and assurance framework for AI-driven Nurse Call Systems (NCS). The framework detects abnormal behavior by simulating realistic call logs, injecting controllable anomalies, and applying a lightweight Isolation Forest model; results are presented through an interactive dashboard. The work targets medical environments, which are delay-sensitive and safety-critical. A distinctive feature is that the framework improves operational reliability without relying on complex deep models or proprietary data, while preserving safety and interpretability. The design emphasizes reproducibility and low computational overhead, enabling rapid deployment on resource-constrained edge devices. Preliminary experiments show that the method maintains reasonable precision overall and achieves high recall on delay-type anomalies. To reflect performance in realistic scenarios, the framework monitors delay metrics and hourly alarm counts, and reports Precision-Recall curves with confidence intervals. Future work will introduce temporal and contextual features and an explainability analysis module, aiming to improve accuracy and better meet the medical industry's auditability requirements. This work focuses on the operational safety and reliability of AI-enabled Nurse Call Systems, addressing runtime failure modes that are underrepresented in current healthcare AI deployments. Rather than proposing new learning models, the contribution lies in a reproducible, interpretable assurance framework suitable for real clinical infrastructure.
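The pipeline described above (simulate call logs, inject controllable delay anomalies, score with a lightweight Isolation Forest, report precision and recall) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' released code: the feature names, anomaly magnitudes, and model parameters are assumptions chosen for the sketch.

```python
# Hypothetical sketch of the abstract's pipeline; all distributions,
# thresholds, and parameters below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)  # fixed seed for reproducibility

# Simulate nurse-call logs: response delay (seconds) and calls per hour.
n = 1000
delay = rng.exponential(scale=30.0, size=n)        # typical response delays
calls_per_hour = rng.poisson(lam=8, size=n).astype(float)

# Inject controllable delay-type anomalies into 5% of records.
labels = np.zeros(n, dtype=int)
anom_idx = rng.choice(n, size=50, replace=False)
delay[anom_idx] += rng.uniform(200.0, 400.0, size=50)  # abnormally long delays
labels[anom_idx] = 1

X = np.column_stack([delay, calls_per_hour])

# Lightweight Isolation Forest; contamination set to the injection rate.
model = IsolationForest(n_estimators=100, contamination=0.05, random_state=42)
pred = (model.fit_predict(X) == -1).astype(int)  # -1 => flagged as anomaly

print(f"precision={precision_score(labels, pred):.2f}")
print(f"recall={recall_score(labels, pred):.2f}")
```

Because the injected delays are far outside the simulated baseline distribution, a shallow ensemble like this typically recovers them with high recall at low computational cost, consistent with the edge-deployment goal stated above.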
To ensure transparency and reproducibility, all code, cleaned datasets, experiment scripts, and an interactive Streamlit demo (which allows users to upload their own CSVs) are publicly released as open research artifacts (Zenodo DOI: 10.5281/zenodo.17767143).