
Silence is golden, but my measures still see: why cheap-but-noisy outcome measures can be more cost-effective than gold standards.

Woolf, B.; Pedder, H.; Rodriguez-Broadbent, H.; Edwards, P.

2022-05-19 epidemiology
10.1101/2022.05.17.22274839 medRxiv

Objective
To assess the cost-effectiveness of using cheap-but-noisy outcome measures, such as a short and simple questionnaire.

Background
To detect associations reliably, studies must avoid bias and random error. To reduce random error, we can increase the size of the study and improve the accuracy of the outcome measurement process. However, with fixed resources there is a trade-off between the number of participants a study can enrol and the amount of information that can be collected on each participant during data collection.

Method
To consider the effect on measurement error of using outcome scales with varying numbers of categories, we define and calculate the Variance from Categorisation that would be expected from using a category midpoint; define the analytic conditions under which such a measure is cost-effective; use meta-regression to estimate the impact of participant burden, defined as questionnaire length, on response rates; and develop an interactive web app to allow researchers to explore the cost-effectiveness of using such a measure under plausible assumptions.

Results
Compared with no measurement, even a scale with only a few categories greatly reduced the Variance from Categorisation. For example, scales with five categories reduce the variance by 96% for a uniform distribution. We additionally show that a simple measure will be more cost-effective than a gold-standard measure if the relative increase in variance from using it is less than the relative increase in cost of the gold standard, assuming it does not introduce bias into the measurement. We found an inverse power-law relationship between participant burden and response rates, such that doubling the burden on participants reduces the response rate by around one third. Finally, we created an interactive web app (https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/) to allow exploration of when a cheap-but-noisy measure will be more cost-effective under realistic parameters.

Conclusion
Cheap-but-noisy questionnaires containing just a few questions can be a cost-effective way of maximising power. However, their use requires a judgement on the trade-off between the potential increase in the risk of information bias and the reduction in the potential for selection bias due to the expected higher response rates.

Key Messages
- A cheap-but-noisy outcome measure, like a short-form questionnaire, is a more cost-effective method of maximising power than an error-free gold standard when the percentage increase in noise from using the cheap-but-noisy measure is less than the relative difference in the cost of administering the two alternatives.
- We have created an R Shiny app to facilitate the exploration of when this condition is met: https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/
- Cheap-but-noisy outcome measures are more likely to introduce information bias than a gold standard, but may reduce selection bias because they reduce loss to follow-up. Researchers therefore need to form a judgement about the relative increase or decrease in bias before using a cheap-but-noisy measure.
- We would encourage the development and validation of short-form questionnaires to enable the use of high-quality cheap-but-noisy outcome measures in randomised controlled trials.
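Two of the abstract's quantitative claims can be checked numerically. A minimal sketch (not the authors' code; the function names and the 200,000-draw simulation are illustrative assumptions): for a uniform score on [0, 1] split into k equal-width categories scored at their midpoints, the residual within-category variance is 1/(12k²) versus 1/12 for the raw score, so five categories remove 1 − 1/25 = 96% of the variance. The second helper encodes one reading of the stated cost-effectiveness condition: the cheap measure wins when its relative variance inflation is below the gold standard's relative cost inflation.

```python
import random

def categorisation_variance_reduction(k, n=200_000, seed=1):
    """Monte Carlo estimate of the share of variance removed when a
    U(0, 1) score is replaced by the midpoint of one of k equal bins."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n):
        x = rng.random()                 # true continuous score
        cat = min(int(x * k), k - 1)     # which of the k bins x falls in
        midpoint = (cat + 0.5) / k       # category midpoint used as the measure
        errors.append(x - midpoint)
    mean = sum(errors) / n
    resid_var = sum((e - mean) ** 2 for e in errors) / n
    total_var = 1 / 12                   # variance of U(0, 1)
    return 1 - resid_var / total_var

def cheap_measure_wins(var_cheap, var_gold, cost_cheap, cost_gold):
    """Cheap measure is more cost-effective if its relative variance
    increase is smaller than the gold standard's relative cost increase
    (assuming the cheap measure introduces no bias)."""
    return (var_cheap / var_gold) < (cost_gold / cost_cheap)

print(round(categorisation_variance_reduction(5), 2))  # prints 0.96
print(cheap_measure_wins(1.3, 1.0, 1.0, 2.0))          # prints True
```

With five categories the simulation reproduces the 96% figure from the Results; a gold standard that doubles per-participant cost is only worth it here if the cheap measure inflates variance by more than a factor of two.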

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                 Papers in training set  Percentile  Probability
1     BMC Medical Research Methodology        43                      Top 0.1%    33.1%
2     PLOS ONE                                4510                    Top 22%     8.4%
3     Trials                                  25                      Top 0.1%    8.4%
4     BMJ Open                                554                     Top 4%      4.9%
----- 50% of probability mass above -----
5     Journal of Clinical Epidemiology        28                      Top 0.1%    4.9%
6     Epidemiology                            26                      Top 0.1%    4.9%
7     Research Synthesis Methods              20                      Top 0.1%    3.6%
8     International Journal of Epidemiology   74                      Top 0.8%    2.9%
9     Journal of Public Health                23                      Top 0.4%    1.7%
10    BMC Medicine                            163                     Top 3%      1.7%
11    BMC Research Notes                      29                      Top 0.2%    1.5%
12    European Journal of Epidemiology        40                      Top 0.4%    1.3%
13    Systematic Reviews                      11                      Top 0.3%    1.2%
14    JMIR Research Protocols                 18                      Top 1.0%    1.2%
15    Statistics in Medicine                  34                      Top 0.3%    0.9%
16    BMJ Global Health                       98                      Top 3%      0.7%
17    JMIR mHealth and uHealth                10                      Top 0.4%    0.7%
18    BMC Public Health                       147                     Top 6%      0.7%
19    Journal of Medical Internet Research    85                      Top 5%      0.7%
20    Pilot and Feasibility Studies           12                      Top 0.7%    0.6%
21    Scientific Reports                      3102                    Top 78%     0.6%
22    Royal Society Open Science              193                     Top 6%      0.6%