
Apparent RSV-COVID interference is not robust to adjustment for shared testing propensity

Steier, J.

2025-12-30 · epidemiology
medRxiv · DOI: 10.64898/2025.12.30.25343230
Abstract

Background
Viral interference, in which infection by one pathogen reduces susceptibility to another at the population level, may shape respiratory virus dynamics. Inference from surveillance data is complicated by time-varying testing behavior, which can induce correlated detection patterns without any biological interaction.

Methods
I developed a two-pathogen renewal model augmented with a ratio penalty that constrains interference estimates to be consistent with observed log-odds ratios of pathogen positivity. The penalty treats other-pathogen positives as implicit controls for shared testing propensity, adapting test-negative design logic to aggregate surveillance. I applied the model to US NAAT surveillance data reported to NREVSS (RSV and COVID-19; October 2020 to February 2026), validated parameter recovery in synthetic experiments, and quantified uncertainty via block bootstrap. I note at the outset that the method is conservative by design: synthetic experiments confirm a bias toward null interference estimates, so near-zero findings should not be read as proof that interference is absent.

Results
Without the ratio penalty, estimated interference was |θ|_sum = 0.0082 for RSV → COVID. With the penalty, this decreased to 0.0016 (an 80% reduction). Bootstrap 95% intervals included zero for all direction × lag combinations. Synthetic validation confirmed high specificity at θ = 0 but revealed that the method cannot recover moderate interference (θ ≤ 0.05), because virus-specific transmissibility deviations absorb the interference signal during Stage 1 estimation. A diagnostic decomposition showed that the ratio penalty term amplifies this bias toward null: at θ = 0.01 in real data, the ratio penalty contributes a -314,000 log-joint penalty, roughly 130 times the multinomial penalty alone. Two-stage estimation was justified empirically; joint MAP estimation failed to converge across all tested configurations.
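The two ingredients above, a renewal process with an interference coupling and a penalty on deviations from the observed positivity ratio, can be sketched as follows. The multiplicative exp(-θ·force) coupling, the function names, and the penalty weight `lam` are illustrative assumptions, not the paper's exact parameterisation.

```python
import numpy as np

def renewal_step(inc_a, inc_b, r_a, r_b, w, theta_ab, theta_ba):
    """One step of a two-pathogen renewal model with interference.

    inc_a, inc_b : past incidence arrays (oldest first) for pathogens A and B.
    r_a, r_b     : baseline reproduction numbers.
    w            : generation-interval weights (most recent week first).
    theta_ab     : interference of A on B (positive values suppress B);
                   theta_ba is the reverse direction.
    """
    force_a = np.dot(inc_a[-len(w):][::-1], w)  # weighted recent incidence of A
    force_b = np.dot(inc_b[-len(w):][::-1], w)
    new_a = r_a * force_a * np.exp(-theta_ba * force_b)  # B suppresses A
    new_b = r_b * force_b * np.exp(-theta_ab * force_a)  # A suppresses B
    return new_a, new_b

def ratio_penalty(model_pos_a, model_pos_b, obs_pos_a, obs_pos_b, lam=1.0):
    """Penalise deviation between model-implied and observed log positivity
    ratios; other-pathogen positives act as implicit controls for shared
    testing propensity, so the ratio cancels common testing fluctuations."""
    eps = 1e-9  # guard against log(0) in weeks with zero positives
    lr_model = np.log(model_pos_a + eps) - np.log(model_pos_b + eps)
    lr_obs = np.log(obs_pos_a + eps) - np.log(obs_pos_b + eps)
    return lam * np.sum((lr_model - lr_obs) ** 2)
```

With both θ set to zero the coupling terms drop out and the step reduces to two independent renewal processes, which is the null the screen is designed to favour.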
Conclusions
The ratio penalty functions as a conservative diagnostic screen with high specificity but limited sensitivity. When applied to RSV-COVID surveillance, it substantially reduces interference point estimates, with confidence intervals spanning zero. These results indicate that apparent interference signals in these data are not robust to this particular adjustment, but the method's known conservative bias means biological interference cannot be excluded. The approach is best understood as a sensitivity analysis rather than a definitive test.

Author Summary
When one respiratory virus circulates widely, it may temporarily suppress transmission of others, a phenomenon called viral interference. Detecting interference from disease surveillance data is difficult because testing behavior changes over time: when any respiratory illness surges, more people seek tests, potentially creating correlated patterns that mimic biological interaction. I developed a statistical method to probe this confounding. Borrowing logic from vaccine studies, the method penalizes the model when its predictions diverge from the observed ratio of positive tests across pathogens. The idea is that this ratio should be stable if testing propensity fluctuates but affects all pathogens similarly. Applied to five years of US surveillance data for RSV and COVID-19, this penalty reduced apparent interference by 80%, with statistical uncertainty intervals including zero. Crucially, the method is intentionally conservative: simulation experiments show it also diminishes real interference signals, because transmissibility parameters absorb the interference effect before it can be estimated. My near-zero estimates therefore do not prove interference is absent; rather, they indicate that apparent signals in these data are not robust to this particular adjustment for testing composition. This work highlights that surveillance-based interference estimates may be sensitive to testing artifacts and provides one approach for assessing this sensitivity.
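The block-bootstrap uncertainty quantification mentioned above can be sketched as a moving-block bootstrap, which resamples contiguous blocks of the weekly series to preserve short-range autocorrelation. The block length and replicate count below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def block_bootstrap_ci(series, estimator, block_len=8, n_boot=500,
                       alpha=0.05, seed=0):
    """Moving-block bootstrap percentile CI for a statistic of a time series.

    series    : 1-D array of weekly observations.
    estimator : function mapping a resampled series to a scalar statistic.
    block_len : length of each resampled block (preserves autocorrelation
                within blocks; too-short blocks understate uncertainty).
    """
    rng = np.random.default_rng(seed)
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    stats = []
    for _ in range(n_boot):
        # Draw overlapping block start positions uniformly at random.
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        resampled = np.concatenate(
            [series[s:s + block_len] for s in starts])[:n]
        stats.append(estimator(resampled))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

In the paper's setting the estimator would be the fitted interference parameter for one direction × lag combination; an interval spanning zero, as reported above, is consistent with no detectable interference.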

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                                Papers (training set)  Percentile  Probability
1     Epidemiology                                           26                     Top 0.1%    23.4%
2     PLOS Computational Biology                             1633                   Top 2%      12.8%
3     PLOS ONE                                               4510                   Top 21%     8.7%
4     American Journal of Epidemiology                       57                     Top 0.1%    8.7%
      (50% of probability mass above this line)
5     Epidemics                                              104                    Top 0.2%    7.1%
6     PLOS Biology                                           408                    Top 5%      2.7%
7     BMC Medicine                                           163                    Top 2%      2.5%
8     PeerJ                                                  261                    Top 4%      2.2%
9     Scientific Reports                                     3102                   Top 49%     2.2%
10    Statistics in Medicine                                 34                     Top 0.2%    1.8%
11    Journal of The Royal Society Interface                 189                    Top 2%      1.8%
12    Proceedings of the National Academy of Sciences        2130                   Top 31%     1.8%
13    Wellcome Open Research                                 57                     Top 0.8%    1.8%
14    eLife                                                  5422                   Top 44%     1.5%
15    BMC Medical Research Methodology                       43                     Top 0.8%    1.3%
16    Methods in Ecology and Evolution                       160                    Top 2%      1.0%
17    mSystems                                               361                    Top 7%      0.8%
18    BMC Public Health                                      147                    Top 6%      0.7%
19    JAIDS Journal of Acquired Immune Deficiency Syndromes  19                     Top 0.4%    0.7%
20    Bioinformatics                                         1061                   Top 10%     0.7%
21    JAMA Network Open                                      127                    Top 5%      0.7%
22    BMJ Open                                               554                    Top 14%     0.5%
23    Clinical Infectious Diseases                           231                    Top 5%      0.5%
24    Patterns                                               70                     Top 3%      0.5%
25    International Journal of Infectious Diseases           126                    Top 4%      0.5%
26    Nature Communications                                  4913                   Top 66%     0.5%
27    BMC Infectious Diseases                                118                    Top 6%      0.5%