
Optimal Experimental Design for Big Data: Applications in Brain Imaging

Bridgeford, E. W.; Wang, S.; Yang, Z.; Wang, Z.; Xu, T.; Craddock, C.; Kiar, G.; Gray-Roncal, W.; Priebe, C. E.; Caffo, B.; Milham, M.; Zuo, X.-N.; Consortium for Reliability and Reproducibility; Vogelstein, J. T.

bioRxiv (neuroscience), 2019-10-13. doi:10.1101/802629
Abstract

Replicability, the ability to replicate scientific findings, is a prerequisite for scientific discovery and clinical utility. Troublingly, we are in the midst of a replicability crisis. A key to replicability is that multiple measurements of the same item (e.g., experimental sample or clinical participant) under fixed experimental constraints are relatively similar to one another. Thus, statistics that quantify the relative contributions of accidental deviations (such as measurement error) as compared to systematic deviations (such as individual differences) are critical. We demonstrate that existing replicability statistics, such as the intraclass correlation coefficient and fingerprinting, fail to adequately differentiate between accidental and systematic deviations in very simple settings. We therefore propose a novel statistic, discriminability, which quantifies the degree to which an individual's samples are relatively similar to one another, without restricting the data to be univariate, Gaussian, or even Euclidean. Using this statistic, we introduce the possibility of optimizing experimental design by increasing discriminability, and prove that optimizing discriminability improves performance bounds in subsequent inference tasks. In extensive simulated and real datasets (focusing on brain imaging and demonstrating on genomics), only optimizing data discriminability improves performance on all subsequent inference tasks for each dataset. We therefore suggest that designing experiments and analyses to optimize discriminability may be a crucial step in solving the replicability crisis, and more generally, mitigating accidental measurement error.

Author Summary

In recent decades, the size and complexity of data have grown exponentially. Unfortunately, the increased scale of modern datasets brings many new challenges. At present, we are in the midst of a replicability crisis, in which scientific discoveries fail to replicate to new datasets. Difficulties in measurement procedures and measurement-processing pipelines, coupled with the influx of complex high-resolution measurements, are, we believe, at the core of the replicability crisis. If measurements themselves are not replicable, what hope can we have of using them to make replicable scientific findings? We introduce the "discriminability" statistic, which quantifies how discriminable measurements are from one another, without limitations on the structure of the underlying measurements. We prove that discriminable strategies tend to be strategies that provide better accuracy on downstream scientific questions. We demonstrate the utility of discriminability over competing approaches in this context on two disparate datasets, from neuroimaging and from genomics. Together, we believe these results suggest the value of designing experimental protocols and analysis procedures that optimize discriminability.
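To make the statistic concrete, here is a minimal sketch of how a sample discriminability could be estimated, following the idea in the abstract that repeated measurements of the same item should be closer to each other than to measurements of other items. The function name `sample_discriminability` and the choice of Euclidean distance are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sample_discriminability(X, subject_ids):
    """Fraction of comparisons in which two measurements of the same
    subject are closer to each other than one of them is to a
    measurement of a different subject.

    X           : (n_measurements, n_features) array of measurements
    subject_ids : length-n sequence of subject labels; repeated
                  measurements of the same subject share a label
    """
    D = squareform(pdist(X))        # pairwise Euclidean distances (illustrative choice)
    ids = np.asarray(subject_ids)
    n = len(ids)

    hits, total = 0, 0
    for i in range(n):
        same = ids == ids[i]
        same[i] = False             # exclude the self-distance
        diff = ids != ids[i]
        for j in np.flatnonzero(same):
            # compare the within-subject distance d(i, j) against
            # every across-subject distance d(i, k)
            hits += np.sum(D[i, diff] > D[i, j])
            total += np.count_nonzero(diff)
    return hits / total

# Toy check: two subjects, two measurements each; within-subject
# measurements are much closer than across-subject ones, so the
# estimate should be 1.0.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(sample_discriminability(X, [0, 0, 1, 1]))  # 1.0
```

A value near 1 indicates that repeated measurements of the same subject cluster tightly relative to across-subject variation; comparing this value across candidate acquisition or processing pipelines is the sense in which the abstract proposes optimizing experimental design.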

Matching journals

The top 6 journals account for 50% of the predicted probability mass.

#   Journal                                          Papers  Match percentile  Probability
1   NeuroImage                                           813  Top 0.4%               22.5%
2   PLOS Computational Biology                          1633  Top 3%                 10.4%
3   eNeuro                                               389  Top 1%                  6.3%
4   PLOS ONE                                            4510  Top 34%                 4.3%
5   Biostatistics                                         21  Top 0.1%                3.7%
6   Aperture Neuro                                        18  Top 0.1%                3.6%
    -- 50% of probability mass above --
7   Human Brain Mapping                                  295  Top 2%                  3.6%
8   Imaging Neuroscience                                 242  Top 1%                  3.6%
9   Network Neuroscience                                 116  Top 0.3%                3.6%
10  Proceedings of the National Academy of Sciences     2130  Top 20%                 3.6%
11  eLife                                               5422  Top 31%                 2.7%
12  Scientific Reports                                  3102  Top 46%                 2.6%
13  Neuron                                               282  Top 5%                  1.9%
14  Nature Methods                                       336  Top 4%                  1.7%
15  Frontiers in Neuroscience                            223  Top 4%                  1.5%
16  Nature Computational Science                          50  Top 1%                  0.9%
17  Journal of Neuroscience Methods                      106  Top 1%                  0.9%
18  Journal of Neurophysiology                           263  Top 0.8%                0.8%
19  Nature Communications                               4913  Top 61%                 0.8%
20  GigaScience                                          172  Top 3%                  0.8%
21  Frontiers in Neuroinformatics                         38  Top 0.7%                0.8%
22  Patterns                                              70  Top 3%                  0.7%
23  Communications Psychology                             20  Top 0.3%                0.7%
24  Communications Biology                               886  Top 29%                 0.6%

Papers: number of papers from each journal in the training set.