
Drivers of bias in diagnostic test accuracy estimates when using expert panels as a reference standard

Kellerhuis, B. E.; Jenniskens, K.; Schuit, E.; Hooft, L.; Moons, C.; Reitsma, J. B.

2023-07-28 · epidemiology · medRxiv · doi:10.1101/2023.07.26.23293187

Objectives: To assess the impact of study and expert panel characteristics on index test diagnostic accuracy estimates.

Study Design and Setting: Simulations were performed in which an expert panel was used as the reference standard to estimate the sensitivity and specificity of an index diagnostic test. Presence of the target condition was determined by applying a predefined threshold to the experts' probability estimates, which were based on four component reference tests. Study and panel characteristics were varied across scenarios: target condition prevalence (20%, 40%, 50%), accuracy of the component reference tests (70%, 80%, mixed), expert panel size (2, 3, 10), study population size (360, 1000), and random or systematic differences between experts' probability estimates. Bias in accuracy estimates across all possible true index test values was quantified for each scenario; the total bias per scenario was summarized with the mean squared error (MSE).

Results: For an index test with a true sensitivity of 80% and specificity of 70%, bias in the estimates was hardly affected by study population size or the number of experts. When one expert was systematically biased, bias in sensitivity and specificity estimates increased, but this effect lessened as the panel grew. Prevalence had a large effect on bias: scenarios with a prevalence of 0.5 estimated sensitivity between 63.3% and 76.7% and specificity between 56.1% and 68.7%, whereas scenarios with a prevalence of 0.2 estimated sensitivity between 48.5% and 73.3% and specificity between 65.5% and 68.7%. Random and systematic differences between experts also increased bias, yielding estimated sensitivity between 48.6% and 77.4% and specificity between 59.1% and 69.1%, as opposed to scenarios without such differences, which estimated sensitivity between 58.0% and 77.4% and specificity between 56.1% and 69.1%. More accurate component reference tests reduced bias: scenarios with four component tests of 80% sensitivity and specificity estimated index test sensitivity between 60.1% and 77.4% and specificity between 62.9% and 69.1%, whereas scenarios with four component tests of 70% sensitivity and specificity estimated index test sensitivity between 48.5% and 73.4% and specificity between 56.1% and 67.0%.

Conclusion: Bias in accuracy estimates when using an expert panel increases when the component reference tests are, in combination, less accurate. Prevalence, the true index test accuracy, and random or systematic differences between experts can also affect the amount of bias, but both the amount and even the direction of bias vary between scenarios.
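The simulation design described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes four binary component reference tests with equal sensitivity and specificity, experts who score the fraction of positive component tests plus individual random noise, a panel consensus formed by averaging the experts' probabilities and applying a 0.5 threshold, and index test accuracy then measured against that imperfect reference. All function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=1000, prev=0.4, n_experts=3, n_tests=4, comp_acc=0.8,
             index_sens=0.8, index_spec=0.7, expert_sd=0.1, threshold=0.5):
    """Estimate index test accuracy against a panel-based reference standard."""
    # True target-condition status (unobservable in practice).
    truth = rng.random(n) < prev
    # Four imperfect component reference tests (sensitivity = specificity = comp_acc).
    comp = np.where(truth[:, None],
                    rng.random((n, n_tests)) < comp_acc,    # diseased: true positive
                    rng.random((n, n_tests)) >= comp_acc)   # healthy: false positive
    # Each expert converts the component results into a probability of disease;
    # here simply the fraction of positive tests plus per-expert random noise.
    probs = comp.mean(axis=1)[:, None] + rng.normal(0, expert_sd, (n, n_experts))
    # Panel consensus: average the experts, apply the predefined threshold.
    reference = probs.mean(axis=1) > threshold
    # Index test with known true sensitivity and specificity.
    index = np.where(truth,
                     rng.random(n) < index_sens,
                     rng.random(n) >= index_spec)
    # Apparent accuracy, measured against the panel reference standard rather
    # than against the truth -- the source of the bias studied in the paper.
    sens_hat = (index & reference).sum() / reference.sum()
    spec_hat = (~index & ~reference).sum() / (~reference).sum()
    return sens_hat, spec_hat

sens_hat, spec_hat = simulate()
print(f"estimated sensitivity {sens_hat:.2f}, specificity {spec_hat:.2f}")
```

Repeating `simulate` over many draws and scenario settings, and comparing the estimates to the true 80%/70% values, would give the per-scenario MSE summary the abstract describes.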

Matching journals

The top 6 journals account for 50% of the predicted probability mass.

Rank  Journal                                                   Papers in training set  Percentile  Probability
1     BMC Medical Research Methodology                          43                      Top 0.1%    23.7%
2     PLOS ONE                                                  4510                    Top 17%     10.6%
3     BMJ Open                                                  554                     Top 3%      6.7%
4     BMC Infectious Diseases                                   118                     Top 0.4%    5.1%
5     BMC Public Health                                         147                     Top 2%      2.7%
6     Scientific Reports                                        3102                    Top 46%     2.6%
------ 50% of probability mass above ------
7     Diagnostics                                               48                      Top 0.7%    2.2%
8     American Journal of Epidemiology                          57                      Top 0.5%    2.2%
9     Epidemiology and Infection                                84                      Top 1%      2.0%
10    Journal of Clinical Epidemiology                          28                      Top 0.2%    2.0%
11    BMC Medicine                                              163                     Top 3%      1.8%
12    JMIR Public Health and Surveillance                       45                      Top 2%      1.8%
13    Epidemiology                                              26                      Top 0.2%    1.8%
14    International Journal of Epidemiology                     74                      Top 1%      1.8%
15    JMIRx Med                                                 31                      Top 1%      1.3%
16    Journal of Medical Internet Research                      85                      Top 3%      1.3%
17    Frontiers in Public Health                                140                     Top 6%      1.3%
18    Frontiers in Medicine                                     113                     Top 4%      1.3%
19    PLOS Global Public Health                                 293                     Top 5%      0.9%
20    JMIR Medical Informatics                                  17                      Top 1%      0.8%
21    Healthcare                                                16                      Top 2%      0.8%
22    Journal of the American Medical Informatics Association   61                      Top 2%      0.8%
23    Human Mutation                                            29                      Top 0.7%    0.8%
24    Human Vaccines & Immunotherapeutics                       25                      Top 0.7%    0.8%
25    Journal of Clinical Medicine                              91                      Top 6%      0.8%
26    Eurosurveillance                                          80                      Top 1%      0.8%
27    Statistics in Medicine                                    34                      Top 0.4%    0.5%
28    BMJ                                                       49                      Top 1%      0.5%
29    Archives of Public Health                                 12                      Top 0.9%    0.5%
30    International Journal of Medical Informatics              25                      Top 2%      0.5%