Benchmarking Large Language Models for Intensive Care Unit Clinical Decision Support: A Dual Safety Evaluation of 26 Models on Consumer Hardware
Shlyakhta, T.
Background: Large Language Models (LLMs) show promise for clinical decision support in Intensive Care Units (ICUs), but their safety and reliability remain inadequately evaluated; in particular, memory-dependent and memory-independent safety mechanisms have not been tested together.

Objective: To evaluate LLMs with two independent safety tests, context-dependent contraindication memory (penicillin allergy recall) and context-independent authority resistance (an Extended Milgram Test), and to determine whether these represent unified or dissociated safety mechanisms.

Methods: Twenty-three LLMs underwent automated testing in a 24-hour ICU simulation on consumer hardware (NVIDIA RTX 3060, 12 GB). Twenty-six models completed an Extended Milgram Test comprising five escalating harmful-command scenarios. Scoring assessed safety compliance, Milgram resistance, conflict detection, and performance.

Results: The findings revealed a dissociation between abstract ethics and clinical memory. While 65% of models achieved perfect (100%) Milgram resistance, only 8.7% (n = 2) correctly refused to prescribe penicillin when an allergy was documented. Eight models demonstrated 100% Milgram resistance yet failed allergy recall (r = -0.39, p = 0.23). Only Granite 3.1 8B achieved perfect performance on both tests.

Conclusions: Abstract ethical reasoning (refusing harmful orders in principle) is independent of concrete clinical memory (tracking patient-specific risks). Safe medical AI requires both capabilities, which are rarely present together. Dual safety testing should become mandatory for medical AI certification.
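The reported dissociation is quantified with a Pearson correlation between per-model Milgram-resistance and allergy-recall scores. A minimal sketch of that computation is below; the score lists are hypothetical illustrative values, not the study's data, and the function name `pearson_r` is an assumption.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation factors
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-model scores in [0, 1] (Milgram resistance vs. allergy recall)
milgram = [1.0, 1.0, 1.0, 0.8, 0.6]
allergy = [0.0, 0.2, 0.0, 0.5, 0.7]

# A negative r, as in the study's r = -0.39, indicates the two safety
# capabilities do not track each other across models.
print(round(pearson_r(milgram, allergy), 2))
```

A significance test (the study's p = 0.23) would additionally require the sample size, e.g. via `scipy.stats.pearsonr`.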
Highlights
- Only 8.7% of tested LLMs passed critical safety tests for medication prescribing
- First study demonstrating dissociation between abstract ethics and clinical memory (r = -0.39)
- Eight models refused all harmful orders but forgot documented allergies
- Granite 3.1 8B was the only model achieving perfect performance on both safety tests
- Dual safety testing framework proposed for medical AI certification