NHANES-GPT: Large Language Models (LLMs) and the Future of Biostatistics

Titus, A. J.

2023-12-15 · health informatics
medRxiv preprint · DOI: 10.1101/2023.12.13.23299830
Abstract

Background: Large Language Models (LLMs) like ChatGPT have significant potential in biomedicine and health, particularly in biostatistics, where they can lower barriers to complex data analysis for novices and experts alike. However, concerns about data accuracy and model-generated hallucinations necessitate strategies for independent verification.

Objective: Using NHANES data as a representative case study, this study demonstrates how ChatGPT can assist clinicians, students, and trained biostatisticians in conducting analyses, and illustrates a method to independently verify the information ChatGPT provides, addressing concerns about data accuracy.

Methods: The study employed ChatGPT to guide an analysis of obesity and diabetes trends in the NHANES dataset from the 2005-2006 through 2017-2018 cycles. The process included data preparation, logistic regression modeling, and iterative refinement of the analyses with confounding variables. ChatGPT's recommendations were verified through direct statistical analysis of the data and cross-referencing with established statistical methodologies.

Results: ChatGPT effectively guided the statistical analysis process, simplifying the interpretation of NHANES data. Initial models indicated increasing trends in obesity and diabetes prevalence in the U.S. Adjusted models, controlling for confounders such as age, gender, and socioeconomic status, provided more nuanced insights, confirming the general trends while highlighting the influence of these factors.

Conclusions: ChatGPT can facilitate biostatistical analyses in healthcare research, making statistical methods more accessible. The study also underscores the importance of independent verification mechanisms to ensure the accuracy of LLM-assisted analyses. This approach can be pivotal in harnessing the potential of LLMs while maintaining rigorous standards of data accuracy and reliability in biomedical research.
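The workflow the abstract describes — regressing a binary outcome (e.g., diabetes status) on survey cycle while adjusting for confounders such as age and gender — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual analysis: the variable names, effect sizes, and cycle coding are assumptions standing in for real NHANES variables.

```python
# Hedged sketch of the abstract's logistic-regression workflow, using
# synthetic data in place of real NHANES files. All effect sizes below
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cycle = rng.integers(0, 7, n).astype(float)  # 0 = 2005-2006 ... 6 = 2017-2018
age = rng.uniform(20, 80, n)
female = rng.integers(0, 2, n).astype(float)

# Assumed data-generating model: prevalence rises with cycle and age.
logit = -5.0 + 0.08 * cycle + 0.05 * age - 0.1 * female
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Design matrix: intercept, survey cycle, and confounders.
X = np.column_stack([np.ones(n), cycle, age, female])

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                     # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

beta = fit_logistic(X, y)
# A positive cycle coefficient corresponds to an increasing adjusted trend.
print("per-cycle change in log-odds:", round(beta[1], 3))
```

In the actual study one would load the relevant NHANES cycles, define the outcome from examination or questionnaire variables, and account for the complex survey design (weights, strata, PSUs) rather than treating observations as a simple random sample as this sketch does.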

Matching journals

The top 2 journals account for 50% of the predicted probability mass.

| # | Journal | Papers in training set | Match percentile | Probability |
|---|---------|------------------------|------------------|-------------|
| 1 | JAMIA Open | 37 | Top 0.1% | 32.8% |
| 2 | Journal of the American Medical Informatics Association | 61 | Top 0.1% | 18.5% |
| 3 | BMC Medical Research Methodology | 43 | Top 0.1% | 6.3% |
| 4 | Journal of Biomedical Informatics | 45 | Top 0.3% | 4.8% |
| 5 | PLOS ONE | 4510 | Top 40% | 3.6% |
| 6 | BMJ Open | 554 | Top 6% | 3.6% |
| 7 | BMC Medical Informatics and Decision Making | 39 | Top 0.8% | 3.6% |
| 8 | JMIR Public Health and Surveillance | 45 | Top 0.7% | 3.6% |
| 9 | Journal of Medical Internet Research | 85 | Top 2% | 2.3% |
| 10 | PLOS Digital Health | 91 | Top 1% | 1.9% |
| 11 | International Journal of Medical Informatics | 25 | Top 0.9% | 1.7% |
| 12 | npj Digital Medicine | 97 | Top 2% | 1.7% |
| 13 | BMJ Health & Care Informatics | 13 | Top 0.6% | 1.3% |
| 14 | JMIR Medical Informatics | 17 | Top 1% | 0.9% |
| 15 | The Lancet Digital Health | 25 | Top 0.9% | 0.9% |
| 16 | European Journal of Epidemiology | 40 | Top 0.7% | 0.8% |
| 17 | JMIRx Med | 31 | Top 2% | 0.7% |
| 18 | International Journal of Epidemiology | 74 | Top 3% | 0.7% |
| 19 | Heliyon | 146 | Top 7% | 0.7% |
| 20 | Pharmacoepidemiology and Drug Safety | 13 | Top 0.5% | 0.7% |
| 21 | DIGITAL HEALTH | 12 | Top 0.7% | 0.7% |
| 22 | Patterns | 70 | Top 3% | 0.7% |
| 23 | Journal of Clinical and Translational Science | 11 | Top 0.5% | 0.6% |