
Epidemiology

Ovid Technologies (Wolters Kluwer Health)

Preprints posted in the last 90 days, ranked by how well they match Epidemiology's content profile, based on 26 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.

1
Covariate adjustment for hierarchical outcomes and the win ratio: how to do it and is it worthwhile?

Hazewinkel, A.-D.; Gregson, J.; Bartlett, J. W.; Gasparyan, S. B.; Wright, D.; Pocock, S.

2026-03-31 cardiovascular medicine 10.64898/2026.03.30.26347966 medRxiv
Top 0.1%
41.2%

Objectives: To introduce a new covariate adjustment method for hierarchical outcomes using ordinal logistic regression, compare it with existing approaches, and assess whether adjustment improves power in randomized trials with hierarchical outcomes. Methods: We developed an ordinal regression-based method for covariate adjustment of the win ratio and compared it with three alternatives: probability index models, inverse probability weighting, and a randomization-based estimator. Methods were applied to the EMPEROR-Preserved trial and tested through extensive simulations involving two common hierarchical outcome structures: time-to-event composites, and composites combining time-to-event with quantitative measures. Simulations assessed impacts on estimates, standard errors, and power across prognostic and non-prognostic settings. Results: In RCT data and simulations, covariate adjustment consistently increased power when adjusting for prognostic baseline variables. Gains were comparable to or greater than those in conventional Cox models, with no power loss for non-prognostic covariates. Our ordinal approach performed similarly to existing methods while providing interpretable covariate effect estimates. Adjusting for baseline values of quantitative components yielded power gains that increased with the baseline-to-follow-up correlation. Conclusions: Covariate adjustment for prognostic variables meaningfully improves efficiency in win ratio analyses for hierarchical outcomes. Our ordinal method is easily implemented and facilitates covariate effect interpretation. We recommend the broader adoption of covariate adjustment and our ordinal method in randomized trials using hierarchical outcomes.
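For readers unfamiliar with the win ratio, the sketch below (hypothetical Python, not the authors' code; the toy data and variable names are invented, and censoring is ignored) shows the basic pairwise comparison for a two-level hierarchical outcome: pairs are compared first on survival time and, if tied at that level, on a quantitative measure.

```python
import numpy as np

# Toy data: 1 = active arm, 0 = control (all values hypothetical)
rng = np.random.default_rng(0)
n = 200
treat = rng.integers(0, 2, n)
death_time = rng.exponential(8 + 4 * treat)  # treated survive longer on average
quality = rng.normal(0.5 * treat, 1.0)       # quantitative component

wins = losses = 0
for i in np.where(treat == 1)[0]:
    for j in np.where(treat == 0)[0]:
        if death_time[i] != death_time[j]:   # level 1: later death wins
            wins += death_time[i] > death_time[j]
            losses += death_time[i] < death_time[j]
        else:                                # level 2: higher quality wins
            wins += quality[i] > quality[j]
            losses += quality[i] < quality[j]

print("unadjusted win ratio:", wins / losses)
```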

2
Bias and Variance of Adjusting for Instruments

Hripcsak, G.; Anand, T.; Chen, H. Y.; Zhang, L.; Chen, Y.; Suchard, M. A.; Ryan, P. B.; Schuemie, M. J.

2026-03-15 epidemiology 10.64898/2026.03.13.26348328 medRxiv
Top 0.1%
38.8%

Propensity score adjustment is commonly used in observational research to address confounding. Controversy persists about how to select covariates as possible confounders when building the propensity model. A desire to include all possible confounders is offset by a concern that more covariates will augment bias or increase variance. Much of the concern is over instruments, which are variables that affect the treatment but not the outcome. Adjusting for an instrument has been shown to increase bias due to unadjusted confounding and to increase the variance of the effect estimate. Large-scale propensity score (LSPS) adjustment includes most available pre-treatment covariates in its propensity model. It addresses instruments with a pair of diagnostics: ceasing the analysis if any covariate exceeds a correlation coefficient of 0.5 with the treatment, and checking for an aggregation of instruments with equipoise reported as a preference score. Our simulation assesses the impact of adjusting for instruments in the context of the LSPS diagnostics. In our simulation, even when the variance of the treatment contributed by the adjusted instrument(s) exceeded that of an unadjusted confounder more than twenty-fold, as long as the correlation between the instrument(s) and the treatment was less than 0.5 and the equipoise was greater than 0.5, the additional shift in the effect estimate due to adjusting for the instrument(s) was less than the shift due to the confounding itself. We therefore find in this simulation that adjusting for instruments contributed only a minor amount of bias to the effect estimate. This finding aligns well with a previous assessment of the impact of adjusting for instruments and with separate empirical evidence that adjusting for many covariates surpasses attempts to identify a limited set of confounders.
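A minimal simulation in this spirit (a hypothetical sketch, not the authors' code; the effect sizes, sample size, and the IPTW estimator are all assumptions) contrasts adjusting for a confounder, an instrument, or both:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000
Z = rng.normal(size=n)                        # instrument: affects treatment only
C = rng.normal(size=n)                        # confounder: affects treatment and outcome
p_treat = 1 / (1 + np.exp(-(1.5 * Z + 0.5 * C)))
T = rng.binomial(1, p_treat)
Y = 1.0 * T + 1.0 * C + rng.normal(size=n)    # true treatment effect = 1.0

def iptw_estimate(covars):
    """Effect estimate via inverse-probability-of-treatment weighting."""
    X = sm.add_constant(np.column_stack(covars))
    ps = sm.Logit(T, X).fit(disp=0).predict(X)
    w = T / ps + (1 - T) / (1 - ps)
    return (np.average(Y[T == 1], weights=w[T == 1])
            - np.average(Y[T == 0], weights=w[T == 0]))

print("adjust for C only :", iptw_estimate([C]))     # ~1.0, unbiased
print("adjust for C and Z:", iptw_estimate([C, Z]))  # ~1.0, but noisier
print("adjust for Z only :", iptw_estimate([Z]))     # biased: C left unadjusted
```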

3
An E-value-Informed Sensitivity Analysis Framework for Hybrid Controlled Trials

Liu, C.; Mayer, M.; Lactaoen, K.; Gomez, L.; Weissman, G.; Hubbard, R.

2026-03-06 epidemiology 10.64898/2026.03.05.26347653 medRxiv
Top 0.1%
37.8%

Hybrid controlled trials (HCTs) incorporate real-world data into randomized controlled trials (RCTs) by augmenting the internal control arm with patients receiving the same treatment in routine care. Beyond increasing power, HCTs may improve recruitment by supporting unequal randomization ratios that increase patient access to experimental treatments. However, HCT validity is threatened by bias from unmeasured confounding due to lack of randomization of external controls, leading to outcome non-exchangeability between internal and external control patients. To address this challenge, we developed a sensitivity analysis framework to assess the robustness of HCT results to potential unmeasured confounding. We propose a tipping point analysis that adapts the E-value framework to the HCT setting where trial participation rather than treatment assignment is subject to confounding. To aid interpretation, we also introduce a data-driven benchmark representing the strength of unmeasured confounding reflected by the observed outcome non-exchangeability. We then propose an operational decision rule and evaluate its performance through simulation studies. Finally, we illustrate the approach using an asthma trial augmented by data from electronic health records. Simulation results demonstrate that our decision rule safeguards against Type I error inflation while preserving the power gains achieved by incorporating external data. In settings where moderate unmeasured confounding led to poorer outcomes for external controls, Type I error was controlled near the nominal 5% level, and power increased by 10-20% compared with analyses using RCT data alone. Our approach provides a practical, interpretable method to assess HCT robustness, supporting rigorous inference when integrating external real-world data.
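For context, the classical E-value that this framework adapts is a closed-form function of the risk ratio; the sketch below gives the standard formula with hypothetical usage, not the paper's HCT-specific adaptation:

```python
import math

def e_value(rr: float) -> float:
    """Standard E-value for a risk ratio; protective RRs are inverted first."""
    rr = 1 / rr if rr < 1 else rr
    return rr + math.sqrt(rr * (rr - 1))

# Tipping-point reading: confounding at least this strong (on the risk-ratio
# scale, with both exposure and outcome) could explain away the estimate.
print(e_value(1.5))   # ~2.37
print(e_value(0.75))  # ~2.00 after inverting the protective estimate
```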

4
Sexual risk behaviours following medical male circumcision: a matched pseudo-cohort analysis using population-based survey data

Mwakazanga, D. K.; Daka, V.; Gwasupika, J. K.; Dombola, A. K.; Kapungu, K. K.; Khondowe, S.; Chongwe, G. K.; Fwemba, I.; Ogundimu, E.

2026-04-13 epidemiology 10.64898/2026.04.11.26350676 medRxiv
Top 0.1%
28.8%

Medical male circumcision (MMC) is an established HIV prevention intervention, yet concerns persist that circumcised men may adopt higher-risk sexual behaviours following the procedure. Evidence from observational studies has been inconsistent, partly because many analyses do not adequately distinguish behaviours that occur before circumcision from those that occur afterward. This study assessed the association between MMC and subsequent sexual behaviours while demonstrating how population-based cross-sectional survey data can be adapted to address this temporal challenge. We analysed nationally representative data from the 2024 Zambia Demographic and Health Survey (ZDHS), including men aged 15-59 years who reported their circumcision status. Men who had undergone medical circumcision were compared with uncircumcised men using a matched pseudo-cohort framework that reconstructed temporal ordering based on age at circumcision. Propensity score overlap weighting was applied to improve comparability between circumcised and uncircumcised men, and odds ratios were estimated using logistic regression models incorporating overlap weights and accounting for the complex survey design. Sexual behaviour outcomes occurring after circumcision included condom non-use at last sexual intercourse, multiple sexual partners in the past 12 months, self-reported sexually transmitted infection (STI) symptoms, and composite measures of sexual risk behaviour. The analysis included 9,609 men, of whom 33.3% were medically circumcised. MMC was associated with lower odds of condom non-use at last sexual intercourse (adjusted odds ratio [aOR] = 0.75, 95% confidence interval [CI]: 0.67-0.85) and lower odds of reporting any sexual risk behaviour (aOR = 0.83, 95% CI: 0.72-0.95). No meaningful associations were observed between MMC and reporting multiple sexual partners, self-reported STI symptoms, or higher levels of composite sexual risk behaviour. In this population-based study, MMC was not associated with sexual risk compensation under routine programme conditions within the overlap population defined by the weighting scheme, supporting the behavioural safety of MMC and illustrating the value of explicitly addressing temporality when analysing behavioural outcomes using cross-sectional survey data.
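The overlap-weighting step can be sketched in a few lines (hypothetical data and effect sizes; the survey-design features, strata and clusters, that the study accounts for are omitted): treated units are weighted by 1 - PS and untreated units by PS.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5_000
age = rng.normal(30, 8, n)
urban = rng.binomial(1, 0.5, n)
X = sm.add_constant(np.column_stack([age, urban]))

# Hypothetical treatment and binary outcome
circumcised = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * age - 1 + 0.4 * urban))))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-0.3 * circumcised + 0.02 * age))))

# Overlap weights: 1 - PS for treated, PS for untreated
ps = sm.Logit(circumcised, X).fit(disp=0).predict(X)
overlap_w = np.where(circumcised == 1, 1 - ps, ps)

# Weighted outcome model (point estimate only; SEs would need design-based
# adjustment for the complex survey setting)
res = sm.GLM(outcome, sm.add_constant(circumcised),
             family=sm.families.Binomial(), freq_weights=overlap_w).fit()
print("overlap-weighted OR:", np.exp(res.params[1]))
```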

5
Advantages of a Two-Stage Randomized Trial Design to Evaluate Antimicrobial Treatment Strategies: a Simulation Study

Gago, J. E.; Boyer, C.; Lipsitch, M.

2026-03-19 epidemiology 10.64898/2026.03.16.26347803 medRxiv
Top 0.1%
19.5%

Background: Antimicrobial prescribing policies affect not only treated patients but also their contacts. Two-stage randomized (2SR) designs can be used to estimate these spillover effects, yet this study design has not been widely applied to evaluate antimicrobial strategies. Methods: We developed a stochastic agent-based model that simulates a hospital ward with two competing bacterial strains (drug-A-susceptible and drug-A-resistant). We used the simulation to emulate a 2SR trial: six hospital ward clusters were randomized 1:1 to either a 90/10 (90% Drug A, 10% Drug B; Drug B was assumed to face no resistance) or 50/50 treatment allocation strategy; individuals within clusters were then randomized to Drug A or Drug B following the assigned cluster-level allocation strategy. We estimated direct, indirect, total, and overall causal effects on incident infection and mortality. Sensitivity analyses varied the treatment effect, transmission rate, mortality structure, and number of clusters. Results: The direct effect of drug choice showed that Drug A recipients had higher mortality (due to non-concordant treatment of resistant infections). This effect varied over time as the wards' strain ecology diverged between strategies. There was also an indirect effect for Drug A recipients--reflecting spillover from higher resistant-strain prevalence under 90/10--but it was approximately null for Drug B recipients, whose broad-spectrum coverage insulated them from changes in the ward strain distribution. The overall effect--the policy-level comparison--showed that the 50/50 strategy reduced total mortality, but this net benefit concealed a redistribution: resistant-strain deaths decreased while susceptible-strain deaths increased, a consequence captured by the overall effect but invisible to the direct effect. These findings were qualitatively consistent across all sensitivity scenarios. Conclusions: We demonstrate that antimicrobial prescribing produces spillover effects not captured by conventional individually randomized trials. These effects can substantially alter treatment outcomes in a population. We propose that the 2SR design, grounded in a formal causal framework for interference, is better suited for evaluating population-level effects of antimicrobial strategies--whether implemented as a randomized trial or emulated with observational data.
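The two-stage assignment itself is simple to express; the sketch below (hypothetical ward sizes, matching the abstract's six clusters) randomizes clusters to an allocation strategy and then individuals to drugs within each cluster:

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, ward_size = 6, 40  # ward size is an assumption

# Stage 1: cluster-level strategy = probability of receiving Drug A
strategies = rng.permutation([0.9] * 3 + [0.5] * 3)

# Stage 2: individual-level assignment within each ward (1 = Drug A, 0 = Drug B)
assignments = {ward: rng.binomial(1, p_drug_a, ward_size)
               for ward, p_drug_a in enumerate(strategies)}

for ward, p in enumerate(strategies):
    print(f"ward {ward}: strategy {p:.0%} Drug A, "
          f"observed {assignments[ward].mean():.0%}")
```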

6
Novel Representations of Vaccine Protection Against Progression to Severe Disease Over Time

Dean, N.; Zarnitsyna, V.

2026-02-14 epidemiology 10.64898/2026.02.12.26346197 medRxiv
Top 0.1%
17.3%

Background: Vaccines can prevent severe disease by preventing infection or by reducing progression among those who become infected. Vaccine effectiveness against progression given infection is often used to quantify this second mechanism, but it conditions on infection, which is itself affected by vaccination. As a result, this estimand lacks a clear causal interpretation and may behave non-intuitively over time. Methods: We introduce a conceptual framework that models protection against infection and protection against progression as separate components that wane over time. Protection is represented using individual-level threshold-crossing times that depend on covariates and define a time-varying population susceptible to infection. Within this framework, we derive standard vaccine effectiveness estimands and propose two alternative decompositions of protection against severe disease: a progression-risk-weighted multiplicative decomposition and an additive decomposition based on absolute risk reduction. We illustrate their behavior using simulated examples. Results: The weighted multiplicative decomposition restores a causal interpretation for progression protection within the doomed principal stratum and avoids negative estimates. The additive decomposition provides a clear representation of the pathways over time. Conclusions: Explicitly modeling the infection-to-severe-disease pathway improves interpretation of vaccine effectiveness under waning immunity.
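As background for the decompositions, the conventional multiplicative relation between the components (a standard identity, not the paper's weighted estimand) can be written and checked in a few lines:

```python
# VE_severe = 1 - (1 - VE_infection) * (1 - VE_progression), so protection
# against progression can be backed out from the other two quantities.
def ve_progression(ve_severe: float, ve_infection: float) -> float:
    return 1 - (1 - ve_severe) / (1 - ve_infection)

# Hypothetical example: 90% protection against severe disease, 60% against
# infection leaves 75% protection acting on progression given infection.
print(ve_progression(0.90, 0.60))  # 0.75
```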

7
Understanding unexpected results from randomized clinical trials: Does coffee reduce atrial fibrillation recurrences?

Brophy, J. M.

2026-04-17 cardiovascular medicine 10.64898/2026.04.13.26350787 medRxiv
Top 0.1%
17.2%

Objective: To explore the interpretation of unexpected results from a randomized controlled trial (RCT). Study Design and Setting: Adjunctive frequentist (power and type-M error) and Bayesian analyses were performed on a recently published RCT reporting a statistically significant relative risk reduction (p < 0.01) for caffeinated coffee drinkers compared with abstinence on atrial fibrillation (AF) recurrence. Individual patient data for the Bayesian survival models were reconstructed from the RCT's published material, with priors informed by the RCT's power calculations. Results: The original RCT design had limited power for realistic effect sizes, increasing susceptibility to type-M (magnitude) error. Bayesian analyses also tempered the benefit for caffeinated coffee implied by the standard statistical analysis, yielding only modest probabilities of clinically meaningful risk reductions (e.g., an 88% probability of a hazard ratio < 0.9 and an 82% probability of a risk difference > 2%). Conclusions: Supplemental frequentist and Bayesian approaches can provide robustness checks for unexpected RCT findings, providing contextualization, clarifying distinctions between statistical and clinical significance, and guiding replication needs. Highlights: Randomized controlled trial (RCT) results may be unexpected and challenge prior beliefs. Supplemental frequentist and Bayesian analyses can clarify interpretation of surprising findings. Power and type-M error assessments help evaluate design adequacy for realistic effects. Bayesian posterior probabilities provide additional nuanced insights into contextualization and clinical significance.
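The power/type-M assessment follows the familiar design-analysis recipe; below is a minimal simulation sketch of that recipe with invented inputs, not the paper's actual effect sizes:

```python
import numpy as np

def design_analysis(true_effect, se, n_sims=100_000, seed=4):
    """Power and type-M exaggeration ratio for an assumed true effect and SE."""
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, se, n_sims)   # sampling distribution of estimates
    signif = np.abs(est) > 1.96 * se            # two-sided 5% significance
    power = signif.mean()
    type_m = np.abs(est[signif]).mean() / abs(true_effect)
    return power, type_m

# Hypothetical underpowered design: small true effect, large standard error
power, exaggeration = design_analysis(true_effect=0.1, se=0.15)
print(f"power = {power:.2f}, type-M exaggeration = {exaggeration:.1f}x")
```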

8
Methodological Guidance for Predictor Variable Selection for Adolescent Smoking Outcomes in Global Youth Tobacco Survey Using R and Python

Ng'ambi, W. F.; Zyambo, C.; Kazembe, L.

2026-02-17 epidemiology 10.64898/2026.02.14.26346305 medRxiv
Top 0.1%
12.7%

Background: The Global Youth Tobacco Survey (GYTS) is widely used to monitor tobacco use among adolescents worldwide. However, inconsistent analytical approaches, particularly in handling complex survey designs and predictor selection, limit comparability across countries, survey waves, and software platforms. Although much of the GYTS literature relies on proprietary tools such as SAS and SPSS, practical and transparent guidance on implementing reproducible, theory-informed analyses remains limited. A unified workflow that respects the survey's design while supporting cross-platform implementation is needed. Methods: We developed a reproducible, open-source workflow for analysing GYTS data using R and Python. In R, analyses were conducted using the survey package (svydesign and svyglm) with constrained stepwise selection via stepAIC. In Python, a custom constrained stepwise procedure was implemented using statsmodels generalized linear models. The workflow explicitly incorporates survey weights, stratification, and clustering; harmonises variables across countries; protects a priori demographic covariates; and ensures consistent treatment of categorical predictors. The approach is illustrated using data from Zambia (n = 2,959) and pooled data from Ghana, Mauritius, Seychelles, and Togo (n = 15,914). Predictor selection was guided by Social Cognitive Theory and evidence from systematic reviews. Results: The constrained selection framework consistently retained key demographic variables (age, sex, and grade) while allowing data-driven selection of modifiable predictors using the Akaike Information Criterion. When identical constraints were applied, the R and Python implementations selected identical models and produced nearly equivalent point estimates (adjusted odds ratio differences < 0.01), although Python-based confidence intervals did not account for clustering. Of 18 candidate predictors across individual, social, media, and policy domains, 14 were retained. The strongest independent predictors included awareness of tobacco products (OR = 5.61, 95% CI: 4.65-6.78), peer smoking (OR = 4.57, 95% CI: 3.34-6.25), and exposure to tobacco marketing (OR = 2.34, 95% CI: 1.89-2.91). Conclusions: This study provides a generalisable, theory-informed framework for predictor selection in complex survey data using open-source tools. The workflow supports consistent analyses across countries, survey waves, and software platforms, and is transferable to other youth and adult population surveys. All code and harmonisation resources are openly available to support reproducibility and adaptation. Plain-Language Summary: What we asked: Can we predict adolescent smoking using GYTS data in a way that is easy to follow and reproducible across software? What we did: Built a single workflow that respects survey design (weights, strata, clusters) and selects predictors using four explicit criteria: theoretical grounding in Social Cognitive Theory, empirical support from prior studies, relevance for intervention, and cross-country validity. Core demographics (age, sex, grade, region) were protected as essential confounders, while other predictors were selected based on statistical fit. The workflow runs equivalently in R and Python. Why it matters: Many GYTS studies use weights only and ignore clustering and stratification, which makes confidence intervals too narrow. More importantly, most analyses include variables arbitrarily or let software drop important confounders automatically. Our approach ensures theoretically meaningful, policy-relevant variables are retained, producing more reliable and actionable results for prevention programs.
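One possible shape for the Python side of such a workflow (a hypothetical sketch, not the authors' released code: the function names, toy data, and weight handling are assumptions, and clustering is not accounted for) is a forward AIC search that never drops the protected covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_glm(df, outcome, covars, weights):
    """Weighted logistic GLM; stratification/clustering not handled here."""
    X = sm.add_constant(df[covars])
    return sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                  freq_weights=df[weights]).fit()

def constrained_forward_aic(df, outcome, protected, candidates, weights):
    """Forward selection by AIC; 'protected' covariates are always kept."""
    selected = list(protected)
    remaining = [c for c in candidates if c not in selected]
    while remaining:
        current_aic = fit_glm(df, outcome, selected, weights).aic
        aics = {c: fit_glm(df, outcome, selected + [c], weights).aic
                for c in remaining}
        best = min(aics, key=aics.get)
        if aics[best] >= current_aic:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy illustration with invented variables and survey weights
rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(13, 18, n),
    "sex": rng.binomial(1, 0.5, n),
    "peer_smoking": rng.binomial(1, 0.3, n),
    "marketing": rng.binomial(1, 0.4, n),
    "w": rng.uniform(0.5, 2.0, n),
})
logit = -3 + 0.1 * df.age + 1.5 * df.peer_smoking
df["smokes"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
print(constrained_forward_aic(df, "smokes", ["age", "sex"],
                              ["peer_smoking", "marketing"], "w"))
```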

9
Causal estimands and target trials for the effect of lag time to treatment of cancer patients

Goncalves, B. P.; Franco, E. L.

2026-04-08 epidemiology 10.64898/2026.04.07.26350338 medRxiv
Top 0.1%
12.6%

Timeliness of therapy initiation is a fundamental determinant of outcomes for many medical conditions, cancer most importantly. Yet existing inefficiencies in healthcare systems mean that delays between diagnosis and treatment frequently compromise clinical outcomes for cancer patients. Although estimates of the effects of lag time to therapy would be informative to policymakers considering resource allocation to minimize delays in oncology, causal methods are seldom explicitly discussed in epidemiologic analyses of these lag times. Here, we propose causal estimands for such studies and outline the protocol of a target trial that could be emulated with observational data on lag times. To illustrate the application of this approach, we simulate studies of lag time to treatment under two scenarios: one in which indication bias (the Waiting Time Paradox) is present and another in which it is absent. Although our discussion focuses on oncologic outcomes, components of the proposed target trial could be adapted to study delays for other medical conditions. We believe that the clarity with which causal questions are posed under the target trial emulation framework would lead to improved quantification of the effects of lag times in oncology, and hence to better informed policy decisions.

10
Integrating stakeholder perspectives in modeling routine data for therapeutic decision-making

Pfaffenlehner, M.; Dressing, A.; Knoerzer, D.; Wagner, M.; Heuschmann, P.; Scherag, A.; Binder, H.; Binder, N.

2026-02-18 epidemiology 10.64898/2026.02.18.26346074 medRxiv
Top 0.1%
10.5%

Background: Routinely collected health data are increasingly used to generate real-world evidence for therapeutic decision-making. Yet stakeholders, including clinicians, pharmaceutical industry representatives, patient advocacy groups, and statisticians, prioritize different aspects of data quality, analysis, and interpretation. Without explicit consideration of these perspectives, analyses risk being fragmented, misaligned with end-user needs, or lacking transparency. Methods: We developed a stakeholder-inclusive conceptual framework for modeling routine health data, informed by an interdisciplinary workshop and supported by targeted literature examples. The framework maps stakeholder priorities to methodological requirements and identifies analytical strategies that enable integration of diverse perspectives. Results: Clinicians prioritize interpretability and clinical relevance; the pharmaceutical industry emphasizes regulatory compliance and real-world evidence generation; patient groups highlight transparency, inclusion of patient-reported outcomes, and privacy protection; and statisticians focus on bias control and methodological rigor. Our framework illustrates how these priorities can be explicitly incorporated into modeling strategies. Multistate models exemplify a methodological approach that operationalizes these requirements by capturing dynamic disease trajectories, integrating intermediate outcomes, and offering graphical interpretability. Beyond specific methodological choices, clinical research relies fundamentally on statistical expertise. Depending on the research goal, statisticians' roles can range from providing statistical consultations for standard analyses, to applying or adapting advanced methods for more complex analyses, to developing new methods for research questions whose specific characteristics require novel approaches. Conclusions: The stakeholder-inclusive framework provides methodological guidance for designing analyses of routine health data that are clinically meaningful, scientifically rigorous, and socially acceptable. By aligning the research question with the intended perspective from the beginning, it supports more robust and transparent evidence generation, with multistate models serving as a flexible tool to operationalize this integration.

11
Estimating the impact of Shigella vaccines on growth outcomes and implications for clinical trial design

Codi, A. M.; Rogawski McQuade, E.; Benkeser, D.

2026-04-04 epidemiology 10.64898/2026.04.03.26350105 medRxiv
Top 0.1%
10.1%

Background: The value proposition for Shigella vaccines is strengthened by the potential for vaccines to prevent linear growth faltering. However, because expected effect sizes in Phase 3 vaccine trials are small due to limited Shigella incidence, a simple comparison of growth by randomized vaccine arm is likely underpowered and may yield null or even inverse results. Methods: We consider a new approach that estimates vaccine effects in the subgroup that would be infected in the absence of vaccination, termed the naturally infected. In simulations parameterized by multi-site studies of diarrhea, we compare power for detecting linear growth effects in the naturally infected versus the full study. We further quantified how power is affected by trial design choices, including immunization schedule, study site, and timing of growth measurements. Findings: Simple comparisons of height-for-age z-score (HAZ) by randomized vaccine arm have extremely limited power (<15%) at realistic trial sizes (n=2,500 to 20,000) and carry a risk of showing an inverse effect due to random chance. In contrast, naturally infected effects were five to ten times larger and power was up to three times higher. Using a twelve-month immunization schedule with a single growth endpoint in high-incidence settings maximized power to detect an effect. Interpretation: While realistically sized clinical trials may be underpowered to detect an effect of vaccination on growth, estimation in the naturally infected subpopulation and careful trial design improve the chances of detecting an effect while mitigating the risks of null or inverse results.

12
Using Negative Control Outcomes to Detect Selection Bias in Mendelian Randomization Studies

Gkatzionis, A.; Davey Smith, G.; Tilling, K.

2026-02-01 epidemiology 10.64898/2026.01.30.26345215 medRxiv
Top 0.1%
8.6%

Mendelian randomization is currently implemented mainly through the use of genetic variants as instrumental variables to investigate the causal effect of an exposure on an outcome of interest. Mendelian randomization studies are robust to confounding bias and reverse causation, but they remain susceptible to selection bias; this can happen, for example, if the exposure or outcome is associated with selection into the study sample. Negative controls are sometimes used to detect biases (typically due to confounding) in observational studies. Here, we focus specifically on Mendelian randomization analyses and discuss under what conditions a variable can be used as a negative control outcome to detect selection mechanisms that could bias Mendelian randomization estimates. We show that the main requirement is that the negative control outcome relates to confounders of the exposure and outcome. Counter-intuitively, the effect of the negative control on selection is of secondary concern; for example, a variable that does not affect selection can be a valid negative control for an outcome that does. We also investigate under what conditions age and sex can be used as negative control outcomes in Mendelian randomization analyses. In a real-data application, we investigate the pairwise causal relationships between 19 traits, utilizing data from the UK Biobank. Treating biological sex as a negative control outcome, we identify selection bias in analyses involving commonly used traits such as alcohol consumption, body mass index and educational attainment.
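The mechanism being detected is collider bias; a toy simulation (hypothetical parameters, not the paper's analysis) shows how conditioning on selection induces an association between a genetic variant and sex, the negative control outcome:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200_000
g = rng.binomial(2, 0.3, n)                  # genetic variant (0/1/2 alleles)
sex = rng.binomial(1, 0.5, n)
x = 0.3 * g + rng.normal(size=n)             # exposure raised by the variant

# Selection depends on both the exposure and sex (a collider)
selected = rng.random(n) < 1 / (1 + np.exp(-(x + sex - 2)))

# Within the selected sample, the variant associates with sex even though
# they are independent in the full population
res = sm.OLS(sex[selected], sm.add_constant(g[selected])).fit()
print("G-sex coefficient:", res.params[1], "p =", res.pvalues[1])
```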

13
Transportability of missing data models across study sites for research synthesis

Thiesmeier, R.; Madley-Dowd, P.; Ahlqvist, V.; Orsini, N.

2026-03-10 epidemiology 10.64898/2026.03.09.26347913 medRxiv
Top 0.1%
8.5%

Introduction: Systematically missing covariates are a common challenge in medical research synthesis of quantitative data, particularly when individual participant data cannot be shared across study sites. Imputing covariate values in studies where they are systematically unobserved, using information from sites where the covariate is observed, implicitly assumes similarity of associations across studies. The behaviour of this assumption, and the bias arising from violating it, remains difficult to reason about qualitatively. Here, we evaluated a two-stage imputation approach for handling systematically missing covariates using simulations across a range of statistical and causal heterogeneity scenarios. Methods: We conducted a simulation study with varying degrees of between-study heterogeneity and systematic differences in model parameters. A binary confounder was set to be systematically missing in half of the studies. Study-specific effect estimates were combined using a two-stage meta-analytic model. The performance of the imputation approach was evaluated with the primary estimand being the pooled conditional confounding-adjusted exposure effect across all studies. Results: Bias in the pooled adjusted effect estimate was small across scenarios with low to substantial between-study heterogeneity. Bias increased monotonically with increasingly pronounced differences in causal structures across study sites. Coverage remained close to the nominal level under low to substantial between-study heterogeneity, but deteriorated markedly as differences in causal structures between study sites became more severe. Conclusion: The two-stage cross-site imputation approach produced valid pooled effect estimates across a wide range of simulated scenarios but showed monotonic sensitivity to differences in causal structures across studies. The results provide insight into the conditions under which cross-site imputation may be appropriate for handling systematically missing covariates in research synthesis.

14
Homicide in Pregnant and Postpartum versus Nonpregnant and Nonpostpartum Populations: Re-estimation of a Rate Ratio using a Person-time Framework

McNellan, C. R.; Marquez, N.; Alexander, M.

2026-01-26 epidemiology 10.64898/2026.01.25.26344756 medRxiv
Top 0.1%
8.2%

We aim to re-estimate the national homicide rate ratio between nonpregnant/nonpostpartum and pregnant/postpartum women, accounting for the person-time exposure that prior studies overlooked. Using a theoretical framework for descriptive epidemiology, we completed a retrospective analysis to estimate the pregnancy-associated homicide rate and re-estimate the national homicide rate ratio between pregnant/postpartum and nonpregnant/nonpostpartum populations in 2020. We use National Vital Statistics System death, fetal death, and birth data, together with Census Bureau data, to identify the population at risk. We compare mortality rates and 95% confidence intervals overall and stratified by race, ethnicity, and age. Among the 9,905,908 pregnancies contributing person-time, there were 185 homicides. The relative homicide risk was 35% higher among nonpregnant/nonpostpartum than among pregnant/postpartum populations. Pregnancy was associated with elevated risk only among ages 10-19 (homicide rate ratio 3.82; 95% CI 2.39-5.77). Homicide rate ratios between nonpregnant/nonpostpartum and pregnant/postpartum women calculated accounting for exposure time and pregnancy transitions contradict previous estimates. Accurate assessment of mortality rates is essential to develop strategies protective against maternal mortality.
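The correction at issue is elementary once exposure is expressed in person-years; the sketch below uses entirely made-up inputs (not the study's data) to show how the person-time denominator enters the rate ratio:

```python
def rate_per_100k(deaths: int, person_years: float) -> float:
    """Mortality rate per 100,000 person-years at risk."""
    return deaths / person_years * 100_000

# All inputs hypothetical; the point is dividing by time at risk, not by
# counts of women, so short pregnant/postpartum exposure windows matter.
preg = rate_per_100k(deaths=150, person_years=12_000_000)
nonpreg = rate_per_100k(deaths=3_000, person_years=180_000_000)
print(f"pregnant/postpartum: {preg:.2f} per 100k person-years")
print(f"nonpregnant/nonpostpartum: {nonpreg:.2f} per 100k person-years")
print(f"rate ratio (nonpreg/preg): {nonpreg / preg:.2f}")
```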

15
An intuitive sampling framework for setting-specific decision-making in soil-transmitted helminthiasis control programs

Kazienga, A.; Levecke, B.; de Vlas, S. J.; Coffeng, L. E.

2026-02-14 epidemiology 10.64898/2026.02.11.26346062 medRxiv
Top 0.1%
7.0%

Background: We recently developed a general egg count framework to support cost-efficient survey design choices to inform soil-transmitted helminthiasis (STH) control programs. Yet its interpretation and application were not always intuitive for program managers. Methods: We first adapted the existing framework to make the interpretation of the risks of incorrect decision-making more intuitive and to allow for prior information. Then, we assessed the impact of the allowable risk of incorrect decision-making and of prior information on the required sample size. Finally, we determined the most cost-efficient survey design to inform the decisions (i) to switch to an event-based deworming program, and (ii) to declare STH eliminated as a public health problem (EPHP). Principal findings: The required sample sizes increased when the allowable risk of incorrect decision-making was reduced and when the mean prior approached the program prevalence threshold. For the decisions to switch to event-based deworming and to declare EPHP, we found that duplicate Kato-Katz thick smears on a single stool sample were the most cost-efficient survey design, particularly when accounting for the added benefits of the free internal quality control. The required sample size for these survey designs varied between program targets and STH species. When aiming to have one sample size that fits all STHs, we recommend sampling 6 schools and 56 children per school for decisions on switching to event-based control programs, and 11 schools (74 children per school) for the decision to declare EPHP. Conclusions/significance: We developed an intuitive sampling framework for setting-specific decision-making in STH control programs. We identified the most cost-efficient survey designs for critical program decisions, but these rest on subjective yet reasonable choices regarding the risk of incorrect decision-making. Reaching consensus within the STH community on acceptable levels of risk is crucial to further support evidence-based decision-making. Author summary: We recently developed a general computer simulation framework to support cost-efficient survey design choices for the control of intestinal worms. However, its interpretation was not always intuitive, and it did not allow incorporation of prior knowledge on the prevalence of infections that programs might have. In this study, we adapted our framework to make the risks of incorrect decision-making more intuitive to interpret and to incorporate prior information on worm prevalence. We then quantified how different risk tolerances and prior prevalence assumptions affected required survey designs. Using this framework, we then identified the most cost-efficient survey designs for two key program decisions: switching to event-based deworming and declaring elimination of intestinal worms as a public health problem. We found that lower tolerance for incorrect decisions and greater uncertainty around prior prevalence substantially increase required sample sizes. Across the different program decisions and worm species, examining duplicate Kato-Katz thick smears from a single stool sample was consistently the most cost-efficient design, with the added benefit of internal quality control. Our results provide practical guidance for designing surveys tailored to local settings and highlight the importance of reaching consensus on acceptable levels of decision-making risk to support evidence-based STH control.

16
Methodological Considerations in Sibling Analyses of Prenatal Acetaminophen

Ahlqvist, V. H.; Sjoqvist, H.; Gardner, R. M.; Lee, B. K.

2026-03-30 epidemiology 10.64898/2026.03.27.26349515 medRxiv
Top 0.1%
6.9%

Background: Sibling-matched designs control for shared familial confounding but remain vulnerable to non-shared confounders. Bi-directional sensitivity analyses, which stratify families by whether the older or younger sibling was exposed, are commonly used to assess carryover effects. We aimed to demonstrate how this methodological approach can introduce severe confounding by parity. Methods: We conducted simulations motivated by a recent epidemiological study. The true causal effect of a hypothetical exposure (prenatal acetaminophen) on neurodevelopmental outcomes was set to strictly null. To introduce parity-related confounding, baseline exposure and outcome probabilities were varied slightly by birth order. We compared conditional logistic regression effect estimates from total sibling models against bi-directional stratified models. Results: In the total simulated sibling cohort, models yielded the true null effect (odds ratio = 1.00) when adjusting for parity. However, the bi-directional analyses exhibited divergent artifactual signals. Because parity is perfectly collinear with exposure in these stratified subsets, it cannot be adjusted for. For example, when the older sibling was exposed, the odds ratio for autism spectrum disorder was 1.68; when the younger was exposed, the odds ratio was 0.60. Conclusions: Divergent estimates in bi-directional sibling analyses can be a predictable artifact of parity confounding rather than evidence of carryover effects or of invalidating unmeasured bias. Overall sibling models adjusting for parity may remain robust despite divergent stratified sensitivity results.
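The collinearity the authors point to is easy to see directly; in the toy pairs below (hypothetical data, not the paper's simulation), restricting to families where the older sibling was exposed makes exposure a deterministic function of birth order:

```python
import pandas as pd

# Two exposure-discordant sibling pairs: in family 1 the older sibling is
# exposed, in family 2 the younger sibling is exposed.
pairs = pd.DataFrame({
    "family": [1, 1, 2, 2],
    "birth_order": [1, 2, 1, 2],
    "exposed": [1, 0, 0, 1],
})

# Stratify as in a bi-directional analysis: keep only "older exposed" families
older_exposed = pairs.groupby("family").filter(
    lambda fam: fam.loc[fam["birth_order"] == 1, "exposed"].iat[0] == 1)

# Within this stratum, exposure == (birth_order == 1) for every child,
# so a parity term cannot be estimated alongside the exposure term.
print((older_exposed["exposed"] == (older_exposed["birth_order"] == 1)).all())
```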

17
Cross-Tabulating Epidemiological Covariates with AUDIT-C Data in Large-Scale Biobanks

Blackburn, A.

2026-04-03 epidemiology 10.64898/2026.04.01.26349975 medRxiv
Top 0.1%
6.9%

Introduction: The Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) is a widely utilized screening tool in large-scale electronic health record (EHR) biobanks. However, its categorical, range-based survey responses present a significant challenge for epidemiological research, especially where continuous quantitative variables may be preferred. Standard workarounds, such as assigning categorical midpoints or utilizing aggregate ordinal scores for regression mapping, often introduce false mathematical precision or obscure critical behavioral nuances between drinking frequency and quantity. This report presents a novel framework for presenting and bounding categorical alcohol survey data. Materials and Methods: I developed two complementary descriptive techniques: (1) a two-dimensional cross-tabulation matrix that preserves the interaction between drinking frequency and typical quantity, and (2) a systematic bounding algorithm that applies time-interval correction factors to calculate strict lower and upper estimates of average daily alcohol consumption. To demonstrate the real-world utility of this framework, I applied these methods to three descriptive analytical scenarios within a European ancestry (EUR) cohort of the All of Us Research Program: Generalized Anxiety Disorder (GAD) prevalence (n=104,893), minor allele frequency (MAF) for the rs1229984 genetic variant (n=104,890), and self-reported active duty military service history (n=104,893). Results: Application of the cross-tabulation matrix revealed patterns across all three descriptive scenarios. For example, participants reporting the highest frequency ("4 or more times a week") combined with the highest quantity ("10 or More" drinks) demonstrated a GAD prevalence of 13.5%, compared to 5.8% among those reporting the same frequency but a low quantity ("1 or 2" drinks). A general trend of increased anxiety in higher quantity drinkers contrasts with a general trend of decreased anxiety in higher frequency drinkers. Bounding estimates for average daily consumption ranged from 0.299 to 0.730 drinks for individuals with GAD, and 0.303 to 0.787 for those without. Those who reported having been active duty in the US Armed Forces demonstrated a general trend toward more frequent drinking and higher average daily consumption estimates (0.339 to 0.875) than those who had not (0.297 to 0.770). The minor allele of the genetic variant rs1229984 exhibited a clear effect reducing both frequency and quantity, resulting in lower average daily consumption estimates. Conclusions: This bounding and mapping framework provides researchers with an alternative to traditional midpoint and aggregate scoring methods. By explicitly defining the uncertainty inherent in categorical survey instruments and visualizing cohort distributions across intersecting behavioral axes, this methodology improves the resolution, reproducibility, and interpretability of lifestyle exposure data.
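The bounding algorithm can be pictured as interval arithmetic over the two survey axes; the sketch below is hypothetical (the range values, the open-ended upper bound, and the time-interval factors are assumptions, not the report's exact mapping):

```python
# Each AUDIT-C frequency and quantity category maps to a [low, high] range;
# their product bounds average daily drinks. All ranges are illustrative.
FREQ_PER_DAY = {                       # share of days with any drinking
    "never": (0.0, 0.0),
    "monthly or less": (1 / 365, 12 / 365),
    "2-4 times a month": (24 / 365, 48 / 365),
    "2-3 times a week": (2 / 7, 3 / 7),
    "4 or more times a week": (4 / 7, 1.0),
}
DRINKS_PER_OCCASION = {                # drinks per drinking day
    "1 or 2": (1, 2),
    "3 or 4": (3, 4),
    "5 or 6": (5, 6),
    "7 to 9": (7, 9),
    "10 or more": (10, 14),            # open-ended cap is an assumption
}

def daily_bounds(freq_label: str, qty_label: str) -> tuple[float, float]:
    """Strict lower/upper bounds on average drinks per day for one cell."""
    f_lo, f_hi = FREQ_PER_DAY[freq_label]
    q_lo, q_hi = DRINKS_PER_OCCASION[qty_label]
    return f_lo * q_lo, f_hi * q_hi

print(daily_bounds("4 or more times a week", "1 or 2"))  # (~0.57, 2.0)
```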

18
Comparison of methods for assessing effects of risk factors on disease progression in Mendelian randomization under index event bias

Zhang, L.; Higgins, I. A.; Dai, Q.; Gkatzionis, A.; Quistrebert, J.; Bashir, N.; Dharmalingam, G.; Bhatnagar, P.; Gill, D.; Liu, Y.; Burgess, S.

2026-03-02 epidemiology 10.64898/2026.02.26.26347193 medRxiv
Top 0.1%
6.6%

Mendelian randomization has emerged as a transformative approach for inferring causal relationships between risk factors and disease outcomes. However, applying Mendelian randomization to disease progression - a critical step in validating pharmacological targets - is hampered by index event bias. This form of selection bias occurs because analyses of disease progression are necessarily restricted to individuals who have already experienced the disease event. Here, we present a comprehensive evaluation of statistical methods designed to mitigate index event bias, including inverse-probability weighting, Slope-Hunter, and multivariable methods. We compare the performance of these methods in simulations and applied examples. Inverse-probability weighting methods reduce bias, but require individual-level data and will only fully eliminate bias when the disease event model is correctly specified. Slope-Hunter performed poorly in all simulation scenarios, even when its assumptions were fully satisfied. Multivariable methods worked best when including genetic variants that affect the incident disease event. However, if these genetic variants also affect disease progression directly, then the analysis will suffer from pleiotropy. Hence, if the same biological mechanisms affect disease incidence and progression, then multivariable methods will have little utility. But in such a case, analyses of disease progression are less critical, as conclusions reached from analyses of disease incidence are likely to hold for disease progression. Our findings indicate that no single method is a universal solution to provide reliable results for the investigation of disease progression. Instead, we propose a strategic framework for method selection based on data availability and biological context.

19
The research fatigue and beneficence scale: development and validation in a nationwide cohort of transgender women in the United States and Puerto Rico

Stevenson, M.; Reisner, S.; Pontes, C.; Linton, S.; Borquez, A.; Radix, A.; Schneider, J.; Cooney, E.; Wirtz, A.; ENCORE Study Group

2026-04-15 epidemiology 10.64898/2026.04.13.26350829 medRxiv
Top 0.1%
6.5%

Transgender women are routinely recruited for HIV prevention research and describe feeling over-researched, undervalued, and disconnected from the benefits of research. Research fatigue refers to the adverse impacts of research participation arising from the volume, frequency, or intensity of research engagement. Research beneficence, an underdeveloped construct, refers to perceptions that research participation is empowering, appreciated, and beneficial to individuals and communities. This study sought to develop and psychometrically evaluate a research fatigue and beneficence scale and examine associations with cohort retention and study procedures among transgender women in the US and Puerto Rico. We developed a novel 7-item measure of research fatigue and beneficence informed by prior literature and qualitative work with transgender women. Among 2189 transgender women enrolled in a US nationwide cohort (April 2023-December 2024), we assessed internal consistency reliability, factor structure, convergent and divergent validity, and predictive validity against 6-month study retention outcomes and procedures for the full 7-item research fatigue and beneficence scale, a 4-item research beneficence subscale, and a single-item research fatigue measure. Research beneficence items demonstrated good internal consistency (0.78) and excellent model fit. Research fatigue and beneficence varied by race/ethnicity, with participants of color reporting both greater empowerment and greater concerns about community-level benefits. The item "I feel that I am asked to participate in research too frequently" was associated with lower 6-month retention, greater survey missingness, and a preference for less invasive HIV testing modalities. Findings highlight multiple dimensions of the research experience and the need for reduced participant burden, culturally tailored study designs, and intentional dissemination efforts to improve participant-centered research practices.

20
Constructing and analyzing a synthetic life course cohort based on pooling two data sources: A case study of early adulthood depression symptomatology and late-life cognition

Zimmerman, S. C.; Buto, P.; Kezios, K.; Zeki Al Hazzouri, A.; Glymour, M. M.

2026-02-27 epidemiology 10.64898/2026.02.25.26347113 medRxiv
Top 0.1%
4.9%

Background: Synthetic cohorts created by combining two cohorts can be useful when no single data set includes both the exposure and outcome data of interest. We estimate the effects of depression in early adulthood on later-life memory outcomes using two nationally representative cohorts separately and in a synthetic sample. Methods: We used the National Longitudinal Study of Youth 1979 (NLSY; N=5,747) and the Health and Retirement Study (HRS; N=6,846), and a synthetic cohort combining exposure data from N=5,680 NLSY participants (born 1957-1965) aged 55-63 in 2020 who completed midlife cognitive assessment between 2006-2020 with outcome data from N=9,726 HRS participants born 1957-1964 who completed cognitive assessments when 47-63 years old and every 2 years thereafter. A 6-item version of the Centers for Epidemiologic Studies-Depression (CES-D) score (range 0-6) was measured from late adolescence through midlife in NLSY and in midlife in HRS. Memory was measured as the sum of immediate and delayed word recall scores up to twice in NLSY at age 48+ and up to 10 times in HRS at age 50+. We generated a synthetic life course cohort, matching HRS participants to NLSY participants based on 10 variables measured in midlife in both cohorts and posited to either confound or mediate the association between early life depressive symptoms and late-life memory. Matching variables included midlife depression and memory. We used confounder-adjusted linear mixed models to estimate the association between earliest reported depressive symptoms in NLSY and HRS with memory in the respective data sets, and evaluated associations of early life depression symptoms with the repeated later life memory measures in the synthetic cohort. Results: In NLSY, each increment in CES-D at age 23-31 was associated with lower average memory scores in midlife (βNLSY_level = -0.050, 95% CI -0.097 to -0.003) but no detectable difference in rate of memory decline (βNLSY_slope = -0.070, 95% CI -0.382 to 0.242). In HRS, CES-D at average age 53 was associated with lower average memory (βHRS_level = -0.163, 95% CI -0.199 to -0.128) but not rate of decline (βHRS_slope = -0.021, 95% CI -0.062 to 0.020). In the synthetic cohort, CES-D at age 23-27 was associated with lower memory score at age 50+ (βsynth_level = -0.044, 95% CI -0.085 to -0.003) but not with rate of cognitive decline (βsynth_slope = 0.005, 95% CI -0.052 to 0.062). Conclusions: Depressive symptoms at ages 23-31 predicted mid- to late-life memory function but had no clear association with memory decline. Combining data across cohorts spanning separate, but overlapping, parts of the life course is a promising approach to overcome data limitations in life course research, but it requires careful implementation to ensure that assumptions are met and estimates are appropriately interpreted.