
The accuracy of large language models in labelling neurosurgical case-control studies and risk of bias assessment: protocol for a study of interrater agreement with human reviewers.

Igoli, J.; Osunronbi, T.; Olukoya, O.; Daniel, J. O. I.; Alemenzohu, H.; Kanu, A.; Kihunyu, A. M.; Okeleke, E.; Oyoyo, H.; Shekoni, O.; Jesuyajolu, D.; Alalade, A. F.

2024-08-12 · surgery
10.1101/2024.08.11.24311830
Abstract

Introduction: Accurate identification of study designs and risk of bias (RoB) assessment are crucial for evidence synthesis in research. However, mislabelling of case-control studies (CCS) is prevalent, leading to downgraded quality of evidence. Large language models (LLMs), a form of artificial intelligence, have shown impressive performance in various medical tasks, but their utility in categorising study designs and assessing RoB remains underexplored. This study will evaluate the performance of four publicly available LLMs (ChatGPT-3.5, ChatGPT-4, Claude 3 Sonnet, Claude 3 Opus) in accurately identifying CCS designs in the neurosurgical literature. We will also assess human-LLM interrater agreement for RoB assessment of true CCS.

Methods: We identified thirty-four top-ranking neurosurgery-focused journals and searched them on PubMed/MEDLINE for manuscripts reported as CCS in the title/abstract. Human reviewers will independently assess study designs and RoB using the Newcastle-Ottawa Scale. The methods sections/full-text articles will be provided to the LLMs to determine study designs and assess RoB. Cohen's kappa will be used to evaluate human-human, human-LLM, and LLM-LLM interrater agreement. Logistic regression will be used to assess study characteristics affecting performance. A p-value < 0.05 will be considered statistically significant, and 95% confidence intervals will be reported.

Conclusion: If human-LLM agreement is high, LLMs could become valuable teaching and quality assurance tools for critical appraisal in neurosurgery and other medical fields. This study will contribute to validating LLMs for specialised scientific tasks in evidence synthesis, which could reduce review costs, speed completion, improve standardisation, and minimise errors.
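
As a sketch of the planned agreement analysis: Cohen's kappa compares observed agreement between two raters with the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). The snippet below computes it for one human-LLM rater pair; the example labels and the use of scikit-learn are illustrative assumptions, since the protocol does not prescribe a specific implementation.

```python
# Minimal sketch of a human-LLM interrater agreement calculation.
# The labels below are hypothetical; scikit-learn is an assumed tool choice.
from sklearn.metrics import cohen_kappa_score

# Hypothetical study-design labels for the same ten manuscripts:
# 1 = true case-control study, 0 = another design (mislabelled as CCS).
human_labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
llm_labels   = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Cohen's kappa adjusts raw percent agreement for chance agreement.
kappa = cohen_kappa_score(human_labels, llm_labels)
print(f"Human-LLM agreement (Cohen's kappa): {kappa:.2f}")
```

The same function applied to each rater pairing would yield the human-human, human-LLM, and LLM-LLM agreement estimates described in the protocol.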

Matching journals

1. PLOS ONE · Public Library of Science (PLoS) · based on 1737 published papers · Top 14% · 1.8× avg
2. BMJ Open · BMJ · based on 553 published papers · Top 7% · 3.7× avg
3. Trials · Springer Science and Business Media LLC · based on 24 published papers · Top 0.2% · 54× avg
4. Research Synthesis Methods · Wiley · based on 17 published papers · Top 0.2% · 100× avg
5. Systematic Reviews · Springer Science and Business Media LLC · based on 11 published papers · #1 · 102× avg
6. British Journal of Anaesthesia · Elsevier BV · based on 13 published papers · Top 0.5% · 35× avg
7. Journal of Clinical Epidemiology · Elsevier BV · based on 29 published papers · Top 0.5% · 24× avg
8. Human Brain Mapping · Wiley · based on 53 published papers · Top 2% · 9.7× avg
9. BMC Neurology · Springer Science and Business Media LLC · based on 11 published papers · Top 1.0% · 25× avg
10. BMC Medical Informatics and Decision Making · Springer Science and Business Media LLC · based on 36 published papers · Top 5% · 5.4× avg
11. Scientific Reports · Springer Science and Business Media LLC · based on 701 published papers · Top 69% · 1.8%
12. Journal of Clinical Medicine · MDPI AG · based on 77 published papers · Top 8% · 3.0× avg
13. Scientific Data · Springer Science and Business Media LLC · based on 30 published papers · Top 2% · 9.8× avg
14. Biology Methods and Protocols · Oxford University Press (OUP) · based on 19 published papers · Top 0.8% · 13× avg
15. npj Digital Medicine · Springer Science and Business Media LLC · based on 85 published papers · Top 9% · 1.9× avg
16. F1000Research · F1000 Research Ltd · based on 28 published papers · Top 3% · 7.2× avg
17. Heliyon · Elsevier BV · based on 57 published papers · Top 9% · 2.6× avg
18. JAMIA Open · Oxford University Press (OUP) · based on 35 published papers · Top 5% · 2.6× avg
19. Frontiers in Medicine · Frontiers Media SA · based on 99 published papers · Top 19% · 0.7%