Distinguishing GPT-4-generated Radiology Abstracts from Original Abstracts: Performance of Blinded Human Observers and AI Content Detector

Ufuk, F.; Peker, H.; Sagtas, E.; Yagci, A. B.

2023-05-03 · radiology and imaging
medRxiv · DOI: 10.1101/2023.04.28.23289283
Objective: To determine GPT-4's effectiveness in writing scientific radiology article abstracts and to investigate the success of human reviewers and an AI content detector in distinguishing these abstracts. Additionally, to determine the similarity scores of abstracts generated by GPT-4 to better understand its ability to create unique text.

Methods: The study collected 250 original articles published between 2021 and 2023 in five radiology journals. The articles were randomly selected, and their abstracts were generated by GPT-4 using a specific prompt. Three experienced academic radiologists independently evaluated the GPT-4-generated and original abstracts, classifying each as original or GPT-4-generated. All abstracts were also uploaded to an AI content detector and a plagiarism detector to calculate similarity scores. Statistical analysis was performed to determine discrimination performance and similarity scores.

Results: Of the 134 GPT-4-generated abstracts, an average of 75 (56%) were detected by reviewers, while an average of 50 (43%) of the original abstracts were falsely categorized as GPT-4-generated. The sensitivity, specificity, accuracy, PPV, and NPV of observers in distinguishing GPT-4-written abstracts ranged from 51.5% to 55.6%, 56.1% to 70%, 54.8% to 60.8%, 41.2% to 76.7%, and 47% to 62.7%, respectively. No significant difference in discrimination performance was observed between observers.

Conclusion: GPT-4 can generate convincing scientific radiology article abstracts. However, human reviewers and AI content detectors have difficulty distinguishing GPT-4-generated abstracts from original ones.
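As a minimal sketch of how the abstract's reported metrics relate to the underlying confusion matrix, the snippet below derives sensitivity, specificity, accuracy, PPV, and NPV from the averaged reviewer counts stated above (75 of 134 GPT-4 abstracts detected; 50 of the remaining 116 originals misclassified). These counts are averages across the three reviewers, so the derived values approximate the per-observer ranges rather than reproduce them exactly.

```python
# Averaged reviewer counts from the abstract (assumed split: 134 GPT-4
# abstracts and 116 originals out of 250 total).
tp = 75            # GPT-4 abstracts correctly flagged as generated
fn = 134 - tp      # GPT-4 abstracts reviewers missed
fp = 50            # original abstracts wrongly flagged as generated
tn = 116 - fp      # original abstracts correctly identified

sensitivity = tp / (tp + fn)                 # ≈ 56%, matching the abstract
specificity = tn / (tn + fp)                 # ≈ 57%, within 56.1%-70%
accuracy = (tp + tn) / (tp + fn + fp + tn)   # ≈ 56%, within 54.8%-60.8%
ppv = tp / (tp + fp)                         # positive predictive value, 60%
npv = tn / (tn + fn)                         # negative predictive value, ≈ 53%

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"accuracy={accuracy:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")
```

Each derived value falls inside the corresponding per-observer range reported in the Results, which is a useful consistency check on the counts.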

Matching journals

The top 3 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Probability
1 | European Radiology | 14 | Top 0.1% | 34.1%
2 | Scientific Reports | 3102 | Top 5% | 10.4%
3 | Annals of Translational Medicine | 17 | Top 0.1% | 8.7%
(50% of probability mass above)
4 | Medicine | 30 | Top 0.2% | 6.6%
5 | PLOS ONE | 4510 | Top 30% | 5.0%
6 | Diagnostics | 48 | Top 0.3% | 4.3%
7 | Medical Physics | 14 | Top 0.2% | 3.8%
8 | Frontiers in Medicine | 113 | Top 2% | 3.0%
9 | Frontiers in Oncology | 95 | Top 2% | 2.4%
10 | npj Precision Oncology | 48 | Top 0.5% | 1.7%
11 | Archives of Clinical and Biomedical Research | 28 | Top 1% | 1.3%
12 | Ultrasound in Medicine & Biology | 10 | Top 0.3% | 1.3%
13 | Nature Communications | 4913 | Top 58% | 1.0%
14 | Stroke: Vascular and Interventional Neurology | 13 | Top 0.3% | 0.9%
15 | PLOS Digital Health | 91 | Top 3% | 0.8%
16 | Informatics in Medicine Unlocked | 21 | Top 1% | 0.7%
17 | PeerJ | 261 | Top 17% | 0.7%
18 | eBioMedicine | 130 | Top 5% | 0.7%
19 | IEEE Access | 31 | Top 1% | 0.5%
20 | JCO Clinical Cancer Informatics | 18 | Top 1% | 0.5%
21 | JAMA Network Open | 127 | Top 5% | 0.5%
22 | Journal of Medical Imaging | 11 | Top 0.4% | 0.5%
23 | Computers in Biology and Medicine | 120 | Top 6% | 0.5%
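The claim that the top 3 journals cover 50% of the predicted probability mass can be checked by accumulating the listed percentages in rank order. This is a small sketch using only the probabilities shown above; the cutoff logic (stop at the first rank whose cumulative mass reaches 50%) is assumed to be how the site computes the marker.

```python
# Predicted probabilities (in %) for the 23 listed journals, in rank order.
probs = [34.1, 10.4, 8.7, 6.6, 5.0, 4.3, 3.8, 3.0, 2.4, 1.7,
         1.3, 1.3, 1.0, 0.9, 0.8, 0.7, 0.7, 0.7, 0.5, 0.5,
         0.5, 0.5, 0.5]

# Walk down the ranking until the cumulative mass first reaches 50%.
cumulative = 0.0
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:
        break

print(rank, round(cumulative, 1))  # rank 3 reaches 53.2%
```

The first three entries sum to 34.1 + 10.4 + 8.7 = 53.2%, so the 50% threshold is indeed crossed exactly at rank 3, consistent with the note above the table.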