An E-value-Informed Sensitivity Analysis Framework for Hybrid Controlled Trials

Liu, C.; Mayer, M.; Lactaoen, K.; Gomez, L.; Weissman, G.; Hubbard, R.

2026-03-06 · epidemiology · medRxiv · doi:10.64898/2026.03.05.26347653
Hybrid controlled trials (HCTs) incorporate real-world data into randomized controlled trials (RCTs) by augmenting the internal control arm with patients receiving the same treatment in routine care. Beyond increasing power, HCTs may improve recruitment by supporting unequal randomization ratios that increase patient access to experimental treatments. However, HCT validity is threatened by bias from unmeasured confounding due to lack of randomization of external controls, leading to outcome non-exchangeability between internal and external control patients. To address this challenge, we developed a sensitivity analysis framework to assess the robustness of HCT results to potential unmeasured confounding. We propose a tipping point analysis that adapts the E-value framework to the HCT setting where trial participation rather than treatment assignment is subject to confounding. To aid interpretation, we also introduce a data-driven benchmark representing the strength of unmeasured confounding reflected by the observed outcome non-exchangeability. We then propose an operational decision rule and evaluate its performance through simulation studies. Finally, we illustrate the approach using an asthma trial augmented by data from electronic health records. Simulation results demonstrate that our decision rule safeguards against Type I error inflation while preserving the power gains achieved by incorporating external data. In settings where moderate unmeasured confounding led to poorer outcomes for external controls, Type I error was controlled near the nominal 5% level, and power increased by 10-20% compared with analyses using RCT data alone. Our approach provides a practical, interpretable method to assess HCT robustness, supporting rigorous inference when integrating external real-world data.
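The tipping point analysis described above builds on the classical E-value of VanderWeele and Ding, which quantifies the minimum strength of unmeasured confounding (on the risk ratio scale) needed to explain away an observed association. As a rough sketch of that underlying quantity (not the paper's HCT-specific adaptation, where participation rather than treatment is confounded), the E-value for a risk ratio can be computed as:

```python
import math

def e_value(rr: float) -> float:
    """Classical E-value for a risk ratio (VanderWeele & Ding, 2017).

    Returns the minimum risk ratio that an unmeasured confounder would
    need to have with both exposure and outcome to fully explain away
    the observed estimate. For protective estimates (RR < 1) the
    reciprocal is used, so the formula applies symmetrically.
    """
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed RR of 1.5 requires an unmeasured confounder associated
# with both exposure and outcome by a risk ratio of about 2.37 each
# to fully account for the estimate.
print(round(e_value(1.5), 2))  # → 2.37
```

The paper's framework repurposes this quantity for trial participation (internal vs. external control status) and pairs it with a data-driven benchmark; the function above illustrates only the standard formula on which that adaptation rests.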

Matching journals

The top 3 journals account for just over 50% of the predicted probability mass.

Rank  Journal                                  Papers in training set  Percentile  Probability
1     Epidemiology                             26                      Top 0.1%    37.8%
2     PLOS ONE                                 4510                    Top 25%     6.8%
3     PLOS Computational Biology               1633                    Top 5%      6.8%
----- 50% of probability mass above -----
4     Statistics in Medicine                   34                      Top 0.1%    4.9%
5     BMC Medical Research Methodology         43                      Top 0.2%    4.3%
6     Scientific Reports                       3102                    Top 31%     4.0%
7     npj Digital Medicine                     97                      Top 1%      3.6%
8     Nature Communications                    4913                    Top 40%     3.6%
9     Medical Decision Making                  10                      Top 0.1%    2.9%
10    Clinical Infectious Diseases             231                     Top 2%      2.1%
11    Trials                                   25                      Top 0.7%    1.9%
12    International Journal of Epidemiology    74                      Top 1%      1.9%
13    Research Synthesis Methods               20                      Top 0.1%    1.7%
14    Pharmacoepidemiology and Drug Safety     13                      Top 0.2%    1.5%
15    eLife                                    5422                    Top 49%     1.2%
16    Nature Human Behaviour                   85                      Top 3%      1.0%
17    Journal of The Royal Society Interface   189                     Top 4%      1.0%
18    BMC Medicine                             163                     Top 6%      0.8%
19    Biometrics                               22                      Top 0.2%    0.8%
20    Cell Reports Medicine                    140                     Top 8%      0.7%
21    JAMIA Open                               37                      Top 2%      0.7%
22    American Journal of Epidemiology         57                      Top 2%      0.6%