
Reasoning Over Pre-training: Evaluating LLM Performance and Augmentation in Women's Health

Imprialou, M.; Kaltsas, N.; Oliinyk, V.; Vigrass, T.; Schwarzmann, J.; Rosenthal, R.; Glastonbury, C.; Wigley, C.; Gillam, M.; Kanani, N.; Supramaniam, P.; Granne, I.; Lindgren, C. M.

2025-05-23 obstetrics and gynecology
10.1101/2025.05.22.25328162 medRxiv

Recent advances in large language models (LLMs) show promise in clinical applications, but their performance in women's health remains underexamined [1]. We evaluated LLMs on 2,337 questions from obstetrics and gynaecology, including 1,392 from the Royal College of Obstetricians and Gynaecologists Part 2 examination (MRCOG Part 2) [2], a UK-based test of advanced clinical decision-making, and 945 from MedQA [3], a dataset derived from the United States Medical Licensing Examination (USMLE). The best-performing model, OpenAI's o1-preview [4] enhanced with retrieval-augmented generation (RAG) [5,6], achieved 72.00% accuracy on MRCOG Part 2 and 92.30% on MedQA, exceeding prior benchmarks by 21.6% [1]. General-purpose reasoning models outperformed domain-specific fine-tuned models such as MED-LM [7]. We also analyse performance by clinical subdomain and find lower accuracy in areas such as fetal medicine and postpartum care. These findings highlight the importance of reasoning capabilities over domain-specific fine-tuning and demonstrate the value of augmentation methods like RAG for improving accuracy and interpretability [8].
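To make the RAG-augmented multiple-choice evaluation concrete, here is a minimal, purely illustrative sketch. It is not the authors' pipeline: the keyword-overlap `retrieve` stands in for a real retriever, and `answer_question` is a stub in place of an LLM call; all function names and data are hypothetical.

```python
def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by simple word overlap with the question.

    A stand-in for a real retriever (e.g. dense embeddings over guidelines).
    """
    q_words = set(question.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]


def answer_question(question: str, options: dict[str, str],
                    context: list[str]) -> str:
    """Stub for an LLM call: pick the option sharing most words with context."""
    ctx_words = set(" ".join(context).lower().split())
    return max(options,
               key=lambda key: len(set(options[key].lower().split()) & ctx_words))


def accuracy(items: list[tuple], corpus: list[str]) -> float:
    """Score (question, options, gold_answer) items with retrieval context."""
    correct = 0
    for question, options, gold in items:
        context = retrieve(question, corpus)
        if answer_question(question, options, context) == gold:
            correct += 1
    return correct / len(items)
```

In this shape, swapping the stub for a real model call and the overlap ranker for a proper retriever yields the standard RAG evaluation loop: retrieve context per question, condition the model on it, and compare the chosen option against the answer key.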
