Epidemics
Elsevier BV
Preprints posted in the last 7 days, ranked by how well they match the content profile of Epidemics, based on 104 papers previously published in the journal. The average preprint has a 0.07% match score for this journal, so anything above that is an above-average fit.
Bahig, S.; Oughton, M.; Vandesompele, J.; Brukner, I.
In dense urban settings, delays between diagnostic sampling and effective isolation can sustain transmission during peak infectiousness. We define a waiting-window transmission externality arising when infectious individuals remain mobile while awaiting results, formalized as E = N·P·TR·D, where N is daily testing volume, P test positivity, TR transmission during the waiting period, and D turnaround time. Using Monte Carlo simulation and a susceptible-infectious-recovered (SIR) framework, we quantify excess infections per 1,000 tests/day under multiple diagnostic workflows. A surge scenario incorporates positive coupling between TR and D (ρ = 0.45), reflecting co-occurrence of laboratory saturation and elevated contacts during system stress. Under centralized 48-hour workflows, excess infections reach ~80 at P = 10% and ~401 at P = 50%, increasing to ~628 under surge conditions. In contrast, near-patient rapid testing and home sampling reduce this to ~5 and ~25-26, respectively. Workflows that eliminate the waiting window--either through immediate isolation at sampling or through home-based PCR that returns results at the point of collection--effectively collapse the transmission term. These findings identify diagnostic delay as a modifiable driver of epidemic dynamics. Operational redesign of testing workflows, including decentralized sampling and home-based molecular diagnostics, offers a scalable pathway to improve epidemic controllability and reduce inequities in dense urban environments.
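The abstract's externality formula lends itself to a quick numerical sketch. The block below evaluates E = N·P·TR·D and wraps it in a small Monte Carlo over the waiting-period transmission rate and turnaround time; all parameter values are illustrative placeholders, not the authors' calibrated inputs.

```python
import random

def excess_infections(n_tests, positivity, tr_per_day, turnaround_days):
    """Waiting-window externality E = N * P * TR * D, per the abstract's definition."""
    return n_tests * positivity * tr_per_day * turnaround_days

def monte_carlo(n_draws=10_000, seed=1):
    """Average E over uncertain TR and D (independent uniform draws, illustrative ranges)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        tr = rng.uniform(0.05, 0.15)  # transmissions per infectious person-day while waiting
        d = rng.uniform(1.0, 3.0)     # turnaround time, days
        total += excess_infections(1_000, 0.10, tr, d)
    return total / n_draws

print(f"mean excess infections per 1,000 tests/day: {monte_carlo():.1f}")
```

With independent draws the Monte Carlo mean converges to N·P·E[TR]·E[D]; positively correlated draws, as in the paper's surge scenario (ρ = 0.45), would push the mean above that product.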
RAZAFIMAHATRATRA, S. L.; RASOLOHARIMANANA, L. T.; ANDRIAMARO, T. M.; RANAIVOMANANA, P.; SCHOENHALS, M.
Interpreting serological data remains challenging, particularly in low-prevalence or cross-reactive contexts, where antibody responses often show substantial overlap between exposed and unexposed individuals and may depart from normal distributional assumptions. Conventional cutoff-based approaches often yield inconsistent or biased estimates of seroprevalence. Here, we present a decisional framework based on finite mixture models (FMMs) that enhances the robustness and interpretability of serological analyses. Beyond simply applying mixture models, our framework integrates multiple methodological innovations: (i) systematic comparison of Gaussian and skew-normal mixture models to accommodate asymmetric antibody distributions; (ii) rigorous model selection using the Cramér-von Mises test (p > 0.01) combined with a parsimonious score (APS) to prioritize models with well-separated clusters; and (iii) hierarchical clustering of posterior probabilities to collapse latent components into biologically meaningful seronegative and seropositive groups. Applied to chikungunya virus (CHIKV) data from Bangladesh, the framework produced prevalence estimates consistent with ROC-based methods while probabilistically identifying borderline cases. Validation on SARS-CoV-2 and dengue datasets further demonstrated its generalizability: for SARS-CoV-2, the approach identified up to five latent clusters with high sensitivity (up to 100%) and specificity (up to 100%), enabling discrimination by disease severity. For dengue, it revealed interpretable subgrouping consistent with background exposure and subclinical infection, despite limited confirmed cases. By integrating distributional flexibility, robust goodness-of-fit testing, and biologically guided cluster consolidation, this decisional FMM framework provides a reproducible and scalable method for serological interpretation across pathogens and epidemiological settings, addressing key limitations of threshold-based classification.
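As a minimal illustration of the mixture-model idea (Gaussian components only; the paper's skew-normal variants, Cramér-von Mises checks, and APS scoring are not reproduced), one can fit a two-component mixture to synthetic log-titres and read seroprevalence off the posterior probabilities:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic log-titres: 80% seronegative around 0, 20% seropositive around 4
values = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(4.0, 1.0, 200)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(values.reshape(-1, 1))
# Label the component with the higher mean as "seropositive"
pos_comp = int(np.argmax(gmm.means_.ravel()))
posterior_pos = gmm.predict_proba(values.reshape(-1, 1))[:, pos_comp]
seroprevalence = float(np.mean(posterior_pos > 0.5))
print(f"estimated seroprevalence: {seroprevalence:.3f}")
```

Borderline samples are exactly those with posterior probabilities near 0.5, which is what allows probabilistic rather than hard threshold classification.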
Garcia Quesada, M.; Wallrafen-Sam, K.; Kiti, M. C.; Ahmed, F.; Aguolu, O. G.; Ahmed, N.; Omer, S. B.; Lopman, B. A.; Jenness, S. M.
Non-pharmaceutical interventions (NPIs) have been important for controlling SARS-CoV-2 transmission, particularly before and during initial vaccine rollout. During the pandemic, the US Centers for Disease Control and Prevention issued isolation and masking guidance in case of COVID-19-like illness, a positive SARS-CoV-2 test, or known exposure to SARS-CoV-2. However, the impact of this guidance on mitigating transmission in office workplaces is unclear. We used a network-based mathematical model to estimate the impact of this guidance on SARS-CoV-2 transmission among office workers and their communities. The model represented social contacts in the home, office, and community. We used data from the CorporateMix study to parametrize social contacts among office workers and calibrated the model to represent the COVID-19 epidemic in Georgia, USA from January 2021 through August 2022. In the reference scenario (58% adherence to guidance among office workers and the broader population), workplace transmission accounted for a small fraction of total infections. Reducing adherence among office workers to 0% increased workplace transmissions by 27.1%, and increasing adherence to 75% reduced workplace transmission by 7.0%. Increasing adherence to 75% among office workers had minimal impact on symptomatic cases and deaths; increasing it among the broader population was more effective in reducing office worker cases and deaths. In our model, moderate adherence to recommended NPIs in workplaces was effective in reducing transmission, but increasing adherence had limited benefit given the low contact intensity and hybrid work arrangements of these workplaces. These results underscore the public health benefits of community-wide adoption of recommended NPIs.
Gada, L.; Afuleni, M. K.; Noble, M.; House, T.; Finnie, T.
Knowing the mortality rates associated with infection by a pathogen is essential for effective preparedness and response. Here, harnessing the flexibility of a Bayesian approach, we produce an estimate of the Infection Fatality Ratio (IFR) for A(H5N1) conditional on explicit assumptions, and quantify the uncertainty thereof. We also apply the method to first-wave COVID-19 data up to March 2020, demonstrating the estimates that could have been obtained had the model been available then. Our analysis uses World Development Indicators (WDI) from the World Bank, the A(H5N1) WHO confirmed cases and deaths tracker by country (2003-2024), and COVID-19 cases and deaths data from Johns Hopkins University (January and February 2020). Since infectious disease dynamics are typically influenced by local socio-economic factors rather than political borders, individual countries are placed within clusters of countries sharing similar WDIs relevant to respiratory viral diseases, with clusters derived by hierarchical clustering. To estimate the IFR, we fit a Negative Binomial Bayesian Hierarchical Model for A(H5N1) and COVID-19 separately. We explicitly modelled key unobserved parameters with informative priors from expert opinion and the literature. By modelling underreporting, our analysis suggests lower fatality (15.3%) compared to WHO's Case Fatality Ratio estimate (54%) based on lab-confirmed cases. However, credible intervals are wide ([0.5%, 64.2%] 95% CrI). Therefore, good preparedness for a potential A(H5N1) pandemic implies scenario planning under our central estimate, as well as for IFRs as high as 70%. Our approach also returns a COVID-19 IFR estimate of 2.8% ([2.5%, 3.1%] 95% CrI), which is consistent with the literature.
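The core adjustment (scaling confirmed cases up by an assumed reporting fraction) can be sketched with a crude Monte Carlo. The counts and the uniform prior below are illustrative stand-ins, not the paper's hierarchical model or data; the deaths/cases ratio is chosen to be roughly 54%, mirroring the WHO CFR quoted above.

```python
import random

def ifr_posterior_draws(deaths, confirmed_cases, n_draws=20_000, seed=2):
    """Crude Monte Carlo: IFR = deaths / (confirmed / reporting_fraction),
    with an illustrative uniform prior on the fraction of infections confirmed."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        reporting = rng.uniform(0.1, 0.9)       # assumed prior, for illustration only
        infections = confirmed_cases / reporting  # scale cases up for underreporting
        draws.append(deaths / infections)
    return sorted(draws)

draws = ifr_posterior_draws(deaths=260, confirmed_cases=480)
median = draws[len(draws) // 2]
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"IFR median {median:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

Even this crude version shows the paper's qualitative result: accounting for unreported infections pulls the central IFR well below the lab-confirmed CFR while leaving the interval wide.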
Ouedraogo, F. A. S.
Despite the evolution of epidemiological analysis and modeling tools, difficulties remain, especially in developing countries, regarding the availability and use of these tools. Often expensive, requiring high technical expertise, and demanding constant connectivity or significant computational resources, these tools, although efficient, present a major gap with the operational realities of health districts. It is in this context that we introduce Episia, an open-source Python library designed to provide a framework that facilitates epidemiological analysis and modeling. It integrates a suite of compartmental epidemic models (SIR, SEIR, SEIRD) with Monte Carlo sensitivity analysis, a complete biostatistics suite validated against the OpenEpi reference standard, and a native DHIS2 client for automated data ingestion. Developed in Burkina Faso and optimized for these settings, it aims not only to address the health challenges encountered in Africa but also to serve as a versatile tool for global health informatics.
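Episia's API is not shown in the abstract, so the snippet below is a generic forward-Euler integration of the SIR model, the simplest of the compartmental families listed; function name and parameters are hypothetical, not Episia's interface.

```python
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the SIR model (state as population fractions)."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I flow over one step
        new_rec = gamma * i * dt      # I -> R flow over one step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Illustrative run: R0 = beta/gamma = 2
s, i, r = sir(beta=0.4, gamma=0.2, s0=0.999, i0=0.001, days=180)
print(f"final susceptible fraction: {s:.3f}")
```

With R0 = 2 the final susceptible fraction should land near the analytic final-size value of about 0.20, a useful sanity check for any such implementation.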
Colliot, L.; Garrot, V.; Petit, P.; Zhukova, A.; Chaix, M.-L.; Mayer, L.; Alizon, S.
Understanding the dynamics of HIV epidemics is important to control them effectively. Classical methods that mainly rely on occurrence data are limited by the fact that an unknown part of the epidemic eludes sampling. Since the early 2000s, phylodynamic methods have enabled the estimation of key epidemiological parameters from virus genetic sequence data. These methods have the advantage of being less sensitive to partial sampling and of providing insights into epidemic history that even predates the first samples. In this study, we analysed 2,205 HIV sequences from the French ANRS PRIMO C06 cohort. We identified and were able to reconstruct the temporal dynamics of two large clades that represent the HIV-1 epidemics in the country. Using Bayesian phylodynamic inference models, we found that the first clade, from subtype B, originated at the end of the 1970s, grew rapidly during the 1980s before decreasing from 2000 to 2015 and stagnating since then. The second clade, from circulating recombinant form CRF02_AG, emerged and spread in the 1980s, grew again in the early 2000s, before declining slightly. We also estimated key epidemiological parameters associated with each clade. Finally, using numerical simulations, we investigated prospective scenarios and assessed the possibility of meeting the 2030 UNAIDS targets. This is one of the rare studies to analyse the HIV epidemic in France using molecular epidemiology methods. It highlights the value of routine HIV sequence data for studying past epidemic trends or designing public health policies.
Robert, A.; Goodfellow, L.; Pellis, L.; van Leeuwen, E.; Edmunds, W. J.; Quilty, B. J.; van Zandvoort, K.; Eggo, R. M.
Background: In England, the burden of respiratory infections varies by ethnicity, contributing to health inequalities, but the role of additional demographic factors remains underexplored. We quantified how differences in social mixing and demographic characteristics between ethnic groups cause inequalities in transmission dynamics. Methods: We analysed the association between ethnicity and the number of contacts of 12,484 participants in the 2024-2025 Reconnect social contact survey, using a negative binomial regression model. We simulated respiratory pathogen epidemics using a compartmental model stratified by age, ethnicity, and contact levels, at a national level and in major cities in England. Findings: After adjusting for demographic variables, participants of Black and Mixed ethnicities had more contacts than those of White ethnicity (rate ratios (RR): 1.18 [95% Credible Interval (CI): 1.11-1.26] and 1.31 [95% CI: 1.14-1.52]). Participants of Asian ethnicity had fewer contacts (RR: 0.85 [95% CI: 0.79-0.91]). In national-level simulations, individuals of White ethnicity had the lowest attack rates due to demographic differences and mixing patterns. Local demographic structures changed simulated dynamics: attack rates in individuals of Black and Mixed ethnicities were approximately double those of White ethnicity in Birmingham, but less than 60% higher in Liverpool. Interpretation: Demographic characteristics and mixing patterns create inequalities in transmission dynamics between ethnicities, while local demographic characteristics and pathogen infectiousness change the expected relative burden. To ensure mitigation strategies are effective and equitable, their evaluation must explicitly account for inequalities arising from local context.
Funding: Medical Research Council, National Institute for Health and Care Research, Wellcome Trust. Research in context. Evidence before this study: We searched PubMed for population-based studies quantifying differences in respiratory infections between ethnic groups, up to 1 April 2026, with no language restrictions. Keywords included: (respiratory pathogens OR influenza OR COVID-19) AND (ethnic* OR race) AND (inequ*) AND (compartmental model OR incidence rate ratio OR hazard ratio). We excluded studies that focused on non-respiratory pathogens (e.g. looking at consequences of COVID-19 on incidence of other pathogens). A population-based cohort study showed that influenza infection risk was higher in South Asian, Black, and Mixed ethnic groups compared to White ethnicity in England. Another population-based cohort study highlighted that during the first wave of COVID-19 in England, the South Asian, Black, and Mixed ethnic groups were more likely to test positive and to be hospitalised than the White ethnic group. Census data in England showed that the distributions of age, household size, household income and employment status differed between ethnic groups, and the recent Reconnect social contact surveys highlighted the impact of each demographic factor on participants' number of contacts. Added value of this study: Our study shows that social contact patterns, mixing, and demographic structure all lead to unequal infection risk between ethnic groups in respiratory pathogen epidemics. Using the largest available social contact survey in England, we show that both the average number of contacts and the proportion of high-contact individuals varied by ethnic group, even after adjusting for participants' demographics. These differences, together with mixing patterns and age structure, led to lower expected incidence among individuals of White ethnicity than in all other ethnic groups in simulated outbreaks.
The level of inequality between ethnic groups changed when we used different values of pathogen transmissibility. Finally, as ethnic composition and population structure differ between cities in England, our results show differences in expected inequalities at a local level. Implications of all the available evidence: Inequalities in infection risk between ethnic groups are context- and pathogen-dependent. They arise from both local population structure and contact patterns. Detailed information on mixing between groups and population structure is needed to accurately measure group-specific infection risk. These findings indicate that public health interventions based only on national-level estimates conceal regional variation in risk and may ultimately increase inequalities. Public health interventions need to be tailored to local contexts to be equitable and effective. Finally, our findings provide a foundation for understanding the progression from infection-risk inequalities to disparities in disease presentation and clinical outcomes.
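The mechanism described, in which group-specific contact rates translate into unequal attack rates, can be sketched with a two-group SIR model under proportionate mixing. The contact ratio of 1.18 echoes the paper's rate ratio; population shares and transmission parameters are illustrative, not the study's calibrated values.

```python
def two_group_sir(c=(1.18, 1.0), n=(0.2, 0.8), beta=0.4, gamma=0.2,
                  days=300, dt=0.1):
    """Two-group SIR with proportionate mixing. c[k] scales group k's contact
    rate; returns the final attack rate in each group."""
    s = [0.999 * nk for nk in n]
    i = [0.001 * nk for nk in n]
    denom = c[0] * n[0] + c[1] * n[1]
    for _ in range(int(days / dt)):
        # contact-weighted infectious prevalence seen by everyone
        prev = (c[0] * i[0] + c[1] * i[1]) / denom
        for k in range(2):
            new_inf = beta * c[k] * prev * s[k] * dt
            s[k] -= new_inf
            i[k] += new_inf - gamma * i[k] * dt
    return [1.0 - s[k] / n[k] for k in range(2)]

a_high, a_low = two_group_sir()
print(f"attack rates: high-contact group {a_high:.2f}, low-contact group {a_low:.2f}")
```

Even a modest contact-rate ratio produces a persistent attack-rate gap, and changing `beta` shifts the size of that gap, which is the transmissibility dependence the abstract reports.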
CHOUHAN, P.; Zavala-Romero, O.; Haseeb, M.
Invasive insect species pose serious threats to agriculture and ecosystems, with their spread increasingly accelerated by global trade and climate change. To support prevention and mitigation efforts, it is essential to map the regions where these pests can survive and thrive. Here, we apply MaxEnt, a leading species distribution modeling framework, to estimate current (2020) and future (2040-2060) suitable habitats for five major invasive insects across the contiguous United States: brown marmorated stink bug, corn earworm, spongy moth, root weevil, and spotted lanternfly. To account for an uncertain climatic future, these projections are generated under four shared socioeconomic pathways, which reflect a range of plausible climate change scenarios. Beyond forecasting distributions, we examine several key modeling decisions, especially those often overlooked in practice. In particular, we find that background sampling strategies play a critical role in model calibration and that a hybrid sampling approach with a moderate buffer bias provides better predictive accuracy. We also show that permutation importance scores, commonly used to rank environmental variables, are highly sensitive to small changes in the background data and should be interpreted with caution. Finally, to bridge the gap between ecological modeling and applied machine learning, we provide a self-contained, math-focused background to MaxEnt aimed at practitioners outside of traditional ecological fields. Overall, this work delivers reproducible modeling workflows and critical insights into building robust, transparent, and ecologically meaningful MaxEnt models for climate-informed species distribution analysis.
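MaxEnt itself is a dedicated framework, but its presence-versus-background structure can be approximated with a penalized logistic regression, a common didactic stand-in for the exponential habitat-suitability model. The two covariates and all points below are synthetic, and this is not the authors' workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Background points: random climate covariates; presences biased toward warm, wet cells
background = rng.normal(0.0, 1.0, size=(1000, 2))
presence = rng.normal(1.5, 0.8, size=(200, 2))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(200), np.zeros(1000)])

# L2-penalized presence-vs-background classification as a MaxEnt-style
# suitability score (higher probability = more suitable habitat)
clf = LogisticRegression(C=1.0).fit(X, y)
suitability_warm = float(clf.predict_proba([[2.0, 2.0]])[0, 1])
suitability_cold = float(clf.predict_proba([[-2.0, -2.0]])[0, 1])
print(f"suitability warm/wet {suitability_warm:.2f} vs cold/dry {suitability_cold:.2f}")
```

The paper's point about background sampling shows up directly here: changing how `background` is drawn changes the fitted coefficients, and hence any importance ranking derived from them.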
Musonda, R.; Ito, K.; Omori, R.; Ito, K.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has continuously evolved since its emergence in the human population in 2019. As of 1 August 2025, more than 1,700 Omicron subvariants have been designated by the Pango nomenclature system. The Pango nomenclature system designates a new lineage based on genetic and epidemiological information of SARS-CoV-2 strains. However, there is a possibility that strains that have similar genetic backgrounds and the same phenotype are given different Pango lineage names. In this paper, we propose a new algorithm, called FindPart-w, which can identify groups of viral lineages that share the same relative effective reproduction numbers. We introduced a new lineage replacement model, called the constrained RelRe model, which constrains groups of lineages to have the same relative effective reproduction numbers. The FindPart-w algorithm searches for the equality constraints that minimise the Akaike Information Criterion of constrained RelRe models. Using hypothetical observation count data created by simulation, we found that the FindPart-w algorithm can identify groups of lineages having the same relative effective reproduction number in a practical computational time. Applying FindPart-w to real-world data of time-stamped lineage counts from the United States, we found that the Pango lineage nomenclature system may have given different lineage names to SARS-CoV-2 strains even if they have the same relative effective reproduction number and similar genetic backgrounds. In conclusion, this study showed that viruses that had the same relative effective reproduction number were identifiable from temporal count data of viral sequences. These findings will contribute to the future development of lineage designation systems that consider both genetic backgrounds and transmissibilities of lineages.
Billet, L. S.; Skelly, D. K.; Sauer, E. L.
Pathogens that persist subclinically across many wildlife populations can drive mass mortality in others. Mass mortality is often abrupt, and the timing can be difficult to predict from host or habitat features alone. In a recent field study tracking ranavirus epizootics in wood frog (Rana sylvatica) breeding ponds, we found that no environmental or biotic feature reliably predicted die-off occurrence or timing. Instead, the trajectory of viral accumulation in the water column was the strongest dynamic predictor of mass mortality. Infected hosts shed virus throughout epizootics, but the influence of waterborne viral concentration on disease progression was apparent only near die-off onset. This pattern suggests a potential threshold-dependent feedback operating through the shared viral environment. Here, we develop a compartmental model linking waterborne viral concentration to the rate at which subclinical infections progress to clinical, high-shedding states within already-infected hosts. We show that a dose-dependent progression model generates the two-phase epizootic trajectory observed in natural die-offs: prolonged subclinical circulation followed by abrupt clinical transition after environmental virus crosses an escalation threshold. The model exhibits a sharp phase transition between subclinical circulation and mass mortality, governed mainly by the clinical-to-subclinical shedding ratio, host density, and pond volume. Existing explanations for die-off variation emphasize individual-level susceptibility, but our model demonstrates that dose-dependent environmental feedback, a mechanism not previously formalized at the population level, can generate the transition from subclinical infection to mass mortality without invoking individual variation in host susceptibility. This mechanism may apply in any system where hosts share a bounded environment, pathogen dose influences disease severity, and pathogen shedding increases with disease progression.
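A minimal version of the dose-dependent progression mechanism can be written as a forward-Euler compartmental model in which the subclinical-to-clinical rate saturates with waterborne virus v via a Hill term, and clinical hosts shed far more virus than subclinical ones. All rates below are invented for illustration, not fitted values from the study.

```python
def epizootic(days=120, dt=0.01, beta=0.2, sigma_sub=1.0, sigma_clin=50.0,
              decay=0.3, v_half=5.0, rho_max=0.5):
    """Pond sketch: susceptibles (s) become subclinical (sub); subclinical hosts
    progress to clinical high shedders (clin) at a rate saturating with
    waterborne virus v. Returns final clinical fraction, v, and the first day
    the clinical fraction exceeds 10% (onset of the abrupt second phase)."""
    s, sub, clin, v = 0.99, 0.01, 0.0, 0.0
    t, first_clin_day = 0.0, None
    while t < days:
        infect = beta * s * (sub + clin) * dt
        progress = rho_max * (v / (v + v_half)) * sub * dt  # dose-dependent progression
        s -= infect
        sub += infect - progress
        clin += progress
        v += (sigma_sub * sub + sigma_clin * clin - decay * v) * dt
        if first_clin_day is None and clin > 0.10:
            first_clin_day = t
        t += dt
    return clin, v, first_clin_day

clin, v, day = epizootic()
print(f"final clinical fraction {clin:.2f}; 10% threshold crossed on day {day:.0f}")
```

The run reproduces the two-phase shape qualitatively: weeks of subclinical circulation with low v, then a rapid clinical transition once clinical shedding begins to feed back into the water column.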
Unegbu, U. L.
Background: Nigeria bears one of the highest maternal mortality burdens globally, with skilled birth attendance (SBA) remaining critically low in many regions. Understanding the independent determinants of SBA is essential for designing targeted interventions. Methods: This cross-sectional study analyzed 21,465 births from the 2018 Nigeria Demographic and Health Survey (NDHS), a nationally representative household survey using stratified two-stage cluster sampling. SBA was defined as delivery attended by a doctor, nurse, midwife, or auxiliary midwife. Multivariable logistic regression was used to estimate adjusted odds ratios (aOR) with 95% confidence intervals for the associations between SBA and maternal education, household wealth, place of residence, geopolitical region, maternal age, parity, and antenatal care (ANC) utilization, after accounting for confounding. Results: The overall prevalence of SBA was 44.9%. In the fully adjusted model, higher education (aOR = 7.01, 95% CI: 5.68-8.67), richest wealth quintile (aOR = 6.27, 95% CI: 5.27-7.46), and attending ≥4 ANC visits (aOR = 3.80, 95% CI: 3.51-4.11) were the strongest independent predictors of SBA. Regional inequalities were pronounced, with SBA prevalence ranging from 17.7% in the North West to 85.6% in the South West. Crude effect estimates for education and wealth were substantially attenuated after adjustment, indicating large confounding by correlated socioeconomic factors. Conclusions: Maternal education, household wealth, ANC utilization, and geopolitical region are independent determinants of SBA in Nigeria. Scaling up ANC programs represents the most immediately actionable intervention, while long-term gains require investment in girls' education and wealth equity. Targeted strategies for the northern regions are urgently needed. Keywords: skilled birth attendance, maternal mortality, Nigeria, DHS, antenatal care, logistic regression, health equity
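The attenuation of crude estimates after adjustment has a simple demonstration: simulate a wealth confounder correlated with education, then compare crude and wealth-adjusted odds ratios from logistic regression. All coefficients and probabilities below are invented for illustration, not NDHS values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20_000
wealth = rng.binomial(1, 0.5, n)                              # confounder
education = rng.binomial(1, np.where(wealth == 1, 0.7, 0.3))  # correlated with wealth
# Outcome depends on both; true conditional log-OR for education is 1.0
logit = -1.0 + 1.0 * education + 1.5 * wealth
sba = rng.binomial(1, 1 / (1 + np.exp(-logit)))

crude = LogisticRegression().fit(education.reshape(-1, 1), sba)
adjusted = LogisticRegression().fit(np.column_stack([education, wealth]), sba)
crude_or = float(np.exp(crude.coef_[0, 0]))
adj_or = float(np.exp(adjusted.coef_[0, 0]))
print(f"crude OR {crude_or:.2f} vs adjusted OR {adj_or:.2f}")
```

Because education and wealth are positively correlated and both raise the outcome, the crude OR absorbs part of wealth's effect and shrinks toward the true conditional value once wealth is in the model, which is the pattern the abstract describes.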
Conteh, B.; Galagan, S. R.; Badji, H.; Secka, O.; Bar, B. T.; Rao, S. I.; Atlas, H.; Omore, R.; Ochieng, J. B.; Tapia, M.; Cornick, J.; Cunliffe, N.; Zegarra Paredes, L. F.; Colston, J.; Islam, M. T.; Mosharraf, M. P.; Qamar, F. N.; Fatima, I.; Pavlinac, P. B.; Hossain, M. J.
Globally, respiratory tract infections (RTI) are a leading cause of morbidity, and in low- and middle-income countries (LMICs) RTI, including pneumonia, are a leading cause of morbidity and mortality in children <5 years. Diarrheal illness increases RTI risk in young children through micronutrient depletion and immune stress, yet data on post-diarrhea RTI burden in LMICs are limited. We determined the prevalence and risk factors of RTI within three months following medically-attended diarrhea (MAD) in children aged 6-35 months enrolled in seven EFGH country sites in Asia, Africa, and South America. The EFGH study prospectively enrolled children aged 6-35 months with MAD in selected health facilities during a 24-month period from 2022 to 2024 and followed them for three months. RTI was defined as cough or difficulty breathing plus the presence of one of the following signs at any scheduled or unscheduled visit during follow-up: stridor; fast breathing; oxygen saturation <90%; or chest indrawing. The period prevalence and 95% confidence intervals of RTI were calculated, and correlates of RTI were assessed using modified Poisson regression. From June 2022 to August 2024, 9,476 children aged 6-35 months presenting with MAD in the EFGH study sites were screened; 9,116 (96.2%) were included in the current study. Nearly half were female (46.7%), and median age was 15 months. Overall, 48.5% received all age-appropriate vaccines, and 87.6% received the pneumococcal vaccine, with significant variation across countries. Nearly one-quarter of children were stunted, 17.2% wasted, and 21.9% underweight. RTI occurred in 3.8% of children during the three-month follow-up, mostly within the first month. Higher prevalence of RTI occurred among children aged 12-23 months (8.7%), those undernourished (16.1%), unvaccinated (4.0%), or living in poor sanitation settings (4.1%).
While children who received all age-appropriate or pneumococcal vaccinations had a lower crude prevalence of RTI, these associations were not statistically significant after adjusting for age, sex and study site. RTI was infrequently observed in the three months following MAD presentation, with significant variability by site and with the highest prevalence in Malawi. RTI risk was highest in 12-23-month-olds and among children with undernutrition, and those living in poor sanitation conditions.
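The period prevalence with a 95% CI can be recovered from the reported figures with a Wilson score interval (one reasonable choice; the study does not state which interval it used).

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% CI for a proportion, e.g. a period prevalence."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# RTI in 3.8% of 9,116 children (figures from the abstract; event count back-calculated)
events = round(0.038 * 9116)
p, lo, hi = wilson_ci(events, 9116)
print(f"prevalence {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

At this sample size the Wilson and Wald intervals nearly coincide; the Wilson form matters more for the small site-level subgroups where the abstract notes wide variation.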
Foster, J. R.; Pepin, K.; Miller, R.
The management of invasive species often emphasizes removals to manage populations. However, evaluating the success of this management technique remains challenging, especially at large scales. Understanding the relationship between removal intensity and population growth is essential for determining when management achieves desired outcomes. We used management removal data (removal resources [e.g. trapping] and relative effort [trap nights]) to estimate population density, demographic structure, and growth rates of invasive wild pigs (Sus scrofa × domesticus) across a large landscape. From the management data and population estimates, we inferred population trajectories in the absence of removals and quantified the proportion of the population removed by the most widely used methods to control wild pigs. We then compared observed removal intensities and population growth rates to predict expected population trajectories immediately after management occurs. Results suggest substantial spatial and temporal variation in wild pig growth rates and variation in the effectiveness of removal efforts. Additionally, removing wild pigs at higher densities had a greater effect on limiting population growth than removals conducted at lower densities, though both are important. However, on large properties, removal intensity was often insufficient to offset population growth, indicating that management effort does not scale to large areas. These results demonstrate how removal data and population modeling can provide robust inference on population dynamics and management effectiveness, offering a scalable framework for evaluating and improving invasive species control programs. We also discuss the current limitation of how effort is defined for different large-mammal removal techniques, and offer potential solutions for a more complete definition, such as going beyond trap nights and including constraints on personnel, equipment, and logistics.
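The removal-versus-growth trade-off can be illustrated with a discrete logistic model in which a constant fraction of the population is removed each year: the population declines only when the removal fraction exceeds the density-dependent per-capita growth rate. Parameters are illustrative, not the paper's estimates.

```python
def project_population(n0, r, k, removal_rate, years=5):
    """Discrete logistic growth with a constant proportional annual removal.
    Returns the population trajectory (illustrative, not the paper's model)."""
    traj = [n0]
    for _ in range(years):
        n = traj[-1]
        growth = r * n * (1 - n / k)      # density-dependent recruitment
        traj.append(max(n + growth - removal_rate * n, 0.0))
    return traj

# Illustrative intrinsic growth r = 0.78/yr: removing 30%/yr fails, 65%/yr succeeds
low = project_population(2000, 0.78, 10_000, 0.30)
high = project_population(2000, 0.78, 10_000, 0.65)
print(f"30% removal: {low[0]:.0f} -> {low[-1]:.0f}; 65% removal: {high[0]:.0f} -> {high[-1]:.0f}")
```

This also shows the density effect the paper reports: the same proportional removal takes out more animals, and offsets more recruitment, when applied at higher density.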
Hussain, A.; Hussain, S.; Bravo de Guenni, L.; Smith, R. L.
Ticks impose major health and economic losses on the livestock sector of Pakistan, yet uncertainty-aware maps of tick burden remain scarce. We focused on the two most common disease-transmitting tick species, Rhipicephalus microplus and Hyalomma anatolicum, to produce exposure-adjusted district-level abundance estimates and predictions for unsampled areas in Punjab and Khyber Pakhtunkhwa (KPK). We compiled heterogeneous tick count records and standardized them per 100,000 animals. District-level climate and physiographic covariates were summarized via principal components analysis. Bayesian spatial models were fit in R-INLA using Gaussian likelihoods and BYM2 over a hybrid adjacency matrix. Competing non-spatial and spatial models were compared, and the best model was used to generate posterior predictions and 95% credible intervals for unsampled districts. Spatial models outperformed non-spatial alternatives and calibrated well. Model robustness was confirmed through eight independent 80/20 train-test splits, showing strong generalization with consistent predictions across seeds. For unsampled areas, R. microplus exhibited a pronounced north-south gradient with high predicted means but wide intervals in the northern highlands, indicating information gaps. H. anatolicum predictions were highest and most precise in southern Punjab. Sensitivity analysis highlighted a dominant spatial component, with modest contributions from PC1 and PC2. The Bayesian spatial models using the Besag-York-Mollié framework delivered comparable, exposure-adjusted tick abundance maps while quantifying uncertainty to guide surveillance. Results suggest a need for immediate control in confirmed hotspots and recommend targeted field sampling in high-uncertainty districts. The workflow generalizes to other vectors, pathogens, and regions for evidence-based livestock health planning.
Liu, Y.; Chen, Z.; Suman, P.; Cho, H.; Prosperi, M.; Wu, Y.
This study developed a large language model (LLM)-based solution to identify people at HIV risk using electronic health records. We transformed structured EHR data, including demographics, diagnoses, and medications, into narrative descriptions ordered by visit date and applied GatorTron, a widely used clinical LLM trained on 82 billion words of de-identified clinical text. We compared GatorTron with traditional machine learning models, including LASSO and XGBoost. We identified a cohort of 54,265 individuals, of whom only 3,342 (6%) had new HIV diagnoses. Our LLM solution, based on GatorTron, achieved excellent performance, reaching an F1 score of 53.5% and an AUC of 0.88, comparable to traditional machine learning approaches. Subgroup analysis showed that, across age, sex, and race/ethnicity groups, both LLM and traditional models achieved AUCs above 0.82. Interpretability analyses showed broadly consistent patterns across LLM models and traditional machine learning models.
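The transformation step, turning structured EHR rows into visit-ordered narrative text, can be sketched as below; the field names and sentence template are hypothetical, not the study's actual serialization scheme.

```python
from datetime import date

def visits_to_narrative(patient, visits):
    """Render structured EHR rows as a chronological narrative, one sentence
    per visit, roughly in the spirit of the abstract's transformation step."""
    lines = [f"Patient: {patient['age']}-year-old {patient['sex']}."]
    for v in sorted(visits, key=lambda v: v["date"]):  # order by visit date
        parts = []
        if v.get("diagnoses"):
            parts.append("diagnosed with " + ", ".join(v["diagnoses"]))
        if v.get("medications"):
            parts.append("prescribed " + ", ".join(v["medications"]))
        lines.append(f"On {v['date'].isoformat()}, {'; '.join(parts)}.")
    return " ".join(lines)

text = visits_to_narrative(
    {"age": 34, "sex": "male"},
    [{"date": date(2023, 5, 2), "diagnoses": ["syphilis"], "medications": []},
     {"date": date(2023, 1, 10), "diagnoses": [], "medications": ["doxycycline"]}],
)
print(text)
```

The resulting string is what a clinical LLM such as GatorTron would then encode for classification, with the date ordering preserving the temporal structure the model relies on.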
Rehman, N.; Guyatt, G.; JinJin, M.; Silva, L. K.; Gu, J.; Munir, M.; Sadagari, R.; Li, M.; Xie, D.; Rajkumar, S.; Lijiao, Y.; Najmabadi, E.; Dhanam, V.; Mertz, D.; Jones, A.
Background: Sustained retention in care supports continuous access to antiretroviral therapy, routine clinical monitoring, and long-term viral suppression. Objective: To compare the effectiveness of interventions for improving retention in care among people living with HIV (PLHIV). Design: Systematic review and network meta-analysis. Data sources: PubMed, Embase, CINAHL, PsycINFO, Web of Science, and the Cochrane Library from 1995 to December 2024. Eligibility criteria: Randomised controlled trials (RCTs) evaluating interventions to improve retention in care, viral load suppression, or quality of life (QoL) among PLHIV, compared with standard of care (SoC) or other interventions. Data extraction and synthesis: Pairs of reviewers independently screened studies, extracted data, and assessed risk of bias using ROBUST-RCT. We conducted a fixed-effect frequentist network meta-analysis and rated intervention categories relative to SoC based on effect estimates and the certainty of evidence. Dichotomous outcomes were summarized as odds ratios (ORs) with 95% confidence intervals (CIs), and continuous outcomes as mean differences (MDs) with 95% CIs. Results: Eighty-four trials enrolling 107,137 PLHIV evaluated 13 intervention categories. For retention in care, five interventions supported by moderate- or high-certainty evidence proved superior to SoC: multi-month dispensing (OR 2.02, 95% CI 1.32 to 3.09), task shifting (OR 1.94, 95% CI 1.42 to 2.66), differentiated service delivery (OR 1.47, 95% CI 1.22 to 1.76), behavioural counselling (OR 1.36, 95% CI 1.21 to 1.54), and supportive interventions (OR 1.31, 95% CI 1.11 to 1.55). For viral load suppression, two interventions supported by moderate- or high-certainty evidence proved superior to SoC: task shifting (OR 2.07, 95% CI 1.25 to 3.43) and behavioural counselling (OR 1.34, 95% CI 1.11 to 1.67). Across outcomes, no intervention demonstrated convincing superiority over other active interventions.
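For the pairwise building block of such a synthesis, a fixed-effect inverse-variance pooling of log odds ratios (recovering each standard error from its 95% CI) looks like this; the two trials below are hypothetical, and a full network meta-analysis requires considerably more machinery (indirect comparisons, consistency checks).

```python
from math import exp, log, sqrt

def pool_fixed_effect(odds_ratios, ci_los, ci_his, z=1.96):
    """Inverse-variance fixed-effect pooling of log odds ratios, with each
    standard error recovered from the reported 95% CI."""
    weights, weighted = [], []
    for or_, lo, hi in zip(odds_ratios, ci_los, ci_his):
        se = (log(hi) - log(lo)) / (2 * z)  # CI width on the log scale -> SE
        w = 1 / se**2
        weights.append(w)
        weighted.append(w * log(or_))
    pooled = sum(weighted) / sum(weights)
    se_pooled = sqrt(1 / sum(weights))
    return exp(pooled), exp(pooled - z * se_pooled), exp(pooled + z * se_pooled)

# Two hypothetical trials of the same intervention vs standard of care
or_, lo, hi = pool_fixed_effect([1.8, 2.3], [1.2, 1.4], [2.7, 3.8])
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The pooled estimate sits between the two trial estimates, weighted toward the more precise one, and its interval is narrower than either input, which is the basic payoff of pooling.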
Conclusions: Among 13 intervention categories, only a subset showed moderate- or high-certainty evidence of superiority to the standard of care, and none showed superiority to other active interventions. Persistent evidence gaps for key populations, diverse settings, and long-term outcomes support the need for context-sensitive and patient-centred interventions.
Registration: PROSPERO CRD42024589177
Strengths and limitations of this study:
- This systematic review followed Cochrane methods and was reported in accordance with PRISMA-NMA guidelines.
- The network meta-analysis integrated direct and indirect evidence to compare multiple intervention categories within a single framework.
- Risk of bias and certainty of evidence were assessed using ROBUST-RCT and the GRADE approach for network meta-analysis, respectively.
- Some networks were sparse, and limited representation of key populations and long-term follow-up constrained the strength and generalisability of inferences.
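The fixed-effect pooling that underlies the pairwise comparisons in a network meta-analysis like this one can be sketched in a few lines. The trial values below are hypothetical, not data from the review; each study's standard error on the log odds ratio is recovered from the reported 95% CI width.

```python
import math

def pooled_or(or_ci_pairs):
    """Fixed-effect inverse-variance pooling of study-level odds ratios.

    Each entry is (OR, lower 95% CI, upper 95% CI). The standard error of
    log(OR) is recovered from the CI width: log(hi) - log(lo) = 2 * 1.96 * SE.
    """
    num = den = 0.0
    for or_, lo, hi in or_ci_pairs:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2                      # inverse-variance weight
        num += w * math.log(or_)
        den += w
    log_or = num / den
    se = math.sqrt(1.0 / den)
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * se),
            math.exp(log_or + 1.96 * se))

# Two hypothetical trials of the same intervention vs. standard of care
or_, lo, hi = pooled_or([(1.8, 1.2, 2.7), (2.2, 1.3, 3.7)])
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# → pooled OR 1.94 (95% CI 1.41 to 2.67)
```

A network meta-analysis additionally chains such direct comparisons through common comparators; this sketch shows only the direct-evidence building block.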
Essex, R.; Lim, S.; Jagnoor, J.
Background: Drowning remains a major global public health challenge. This study examined whether the timing and trajectories of urbanisation, beyond the current built environment, are associated with subnational drowning mortality.
Methods: We linked satellite-derived measures of built-environment change (GHSL), population crowding (WorldPop), surface water exposure (JRC Global Surface Water), and infrastructure proxies (VIIRS/DMSP nighttime lights) to GBD 2021 drowning mortality estimates across 203 ADM1 regions in 12 countries (2006-2021; 3,248 region-year observations). Temporal predictors captured recent expansion, development "newness" (≤10-year built share), acceleration/volatility, and a crowding × growth interaction. We screened predictors using LASSO (10-fold cross-validation) and fitted mixed-effects models with region random intercepts. Distributed-lag models tested temporal precedence and development age, and income-stratified models assessed heterogeneity.
Results: Adding temporal predictors improved fit beyond contemporaneous built-environment measures (ΔAIC = 177; ΔBIC = 147). In adjusted models, the crowding × growth interaction was strongly positively associated with drowning mortality, and a higher share of recent development was associated with higher mortality. Lag models showed a development-age gradient: older built environment was most protective. Associations differed by income group, with several key coefficients reversing sign across strata.
Discussion: Drowning mortality appears shaped by development histories as well as present-day conditions, with risk concentrated in rapidly changing, dense settings and the newest built environments. Cross-context heterogeneity suggests mechanisms and prevention priorities are unlikely to be uniform.
Conclusions: Development timing and trajectories help explain subnational drowning mortality beyond current built form alone. Prevention and planning should prioritise transition-period safety strategies in newly developing and rapidly densifying areas.
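The distributed-lag setup described above can be illustrated with a minimal sketch: for each region, a yearly exposure series is shifted to form lagged columns, so a regression can weigh recent versus older development. The series and lags below are toy values, not the study's data.

```python
import numpy as np

def lagged_design(x, lags):
    """Build a distributed-lag design matrix from an annual series.

    x: 1-D sequence of a region's yearly exposure (e.g. built-up share);
    lags: iterable of non-negative integer lags. Rows without a complete
    lag history are dropped, as in a distributed-lag regression.
    """
    x = np.asarray(x, dtype=float)
    max_lag = max(lags)
    cols = [x[max_lag - l : len(x) - l] for l in lags]
    return np.column_stack(cols)

# Toy series of 6 yearly values with lags 0-2 → 4 usable rows, 3 columns
X = lagged_design([1, 2, 3, 4, 5, 6], lags=[0, 1, 2])
print(X.shape)  # (4, 3): each row holds current, 1-year-old, 2-year-old values
```

In the study's framing, coefficients on the longer lags would capture the protective effect of an older built environment.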
Person, E. S.; Andreadis, C. R.; Beaton, A. G.; Namunyak, A. N.; Kariuki, E.; Solheim, P.; Taylor, A.; Leimgruber, P.; Moraes, R. N.; Iaizzo, P. A.; Tung, J.; Pontzer, H.; Akinyi, M. Y.; Alberts, S. C.; van Dam, T. J.; Laske, T. G.; Archie, E. A.
- Cardiac rate and rhythm reveal how animals adapt physiologically to day-to-day challenges, with consequences for health and fitness. However, these data remain difficult to collect in wild animals.
- Here, we present a system for collecting and transmitting long-term, fine-scaled physiological data in wild animals. We implanted Bluetooth-enabled cardiac and physiological monitoring devices in three wild adult female baboons in the Amboseli ecosystem in Kenya and paired these devices with collars that enabled remote data downloads over a long-range wide-area network (LoRaWAN).
- The system performed well over >10 months, providing the first long-term cardiac data in wild primates. The baboons showed strong circadian patterns in heart rate, heart rate variability, and activity. We also present data on one female who left her social group for unknown reasons; while alone she exhibited higher heart rate variability, lower activity, and evidence of disrupted sleep.
- In sum, physiologgers paired with low-energy methods of remote data retrieval are powerful tools for investigating physiology in wild animals on timescales that extend over many months, with minimal disruption to their behavior.
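As an illustration of the kind of summary such cardiac data support, the sketch below computes mean heart rate and RMSSD, a standard time-domain heart-rate-variability measure, from a toy series of RR intervals. The values are hypothetical, and the choice of RMSSD is an assumption; the abstract does not name which HRV metric the authors used.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences (RMSSD), a common
    time-domain heart-rate-variability summary, from RR intervals in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def mean_hr(rr_ms):
    """Mean heart rate (beats per minute) from RR intervals in ms."""
    return 60000.0 / (sum(rr_ms) / len(rr_ms))

# Toy RR-interval series (ms) with modest beat-to-beat variation
rr = [800, 810, 790, 805, 795]
print(round(mean_hr(rr)), round(rmssd(rr), 1))  # → 75 14.4
```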
Priyanka, S. S.; Sujon, M. S. H.; Farzana, A.; Dasgupta, D. P.; Bhuyan, G. S.; Ali, N. B.
Dropout from essential maternal health services across pregnancy, childbirth, and the postnatal period remains a major barrier to improving maternal and neonatal outcomes in Bangladesh. This study examined stage-specific dropout patterns along the maternal continuum of care and identified factors associated with discontinuation. We analysed nationally representative data from the Bangladesh Demographic and Health Survey 2022 for 5,162 women with a recent live birth. Dropout from antenatal care, skilled birth attendance, and postnatal care was examined using multivariable logistic regression to estimate adjusted odds ratios and 95% confidence intervals, with comparisons to BDHS 2017-18 and assessment of regional variation. Only 44% of women received four or more antenatal care visits. Of these, 33% delivered with a skilled birth attendant, and among those receiving both antenatal care and skilled delivery, only 15% received postnatal care within 48 hours. Overall, 57% dropped out before completing adequate antenatal care, with additional dropouts between antenatal care and delivery (10%) and between delivery and postnatal care (18%). Compared with 2017-18, overall dropout from the maternal continuum of care more than doubled in 2022 (5.0% to 11.7%), driven by increased antenatal care dropout, while skilled birth attendance dropout declined and postnatal care dropout increased slightly. Higher maternal education, household wealth, media exposure, and women's decision-making power were consistently associated with lower odds of dropout, whereas higher birth order increased dropout risk. Substantial regional variation was observed, with the highest overall dropout in Sylhet and the lowest in Khulna. High dropout from the maternal continuum of care in Bangladesh occurs predominantly at the antenatal care stage and is shaped by socioeconomic status, birth order, women's access to information, and regional disparities.
Strengthening early antenatal engagement and women's decision-making autonomy is critical to improving continuity of maternal care and reducing preventable maternal and neonatal risks.
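The cascade arithmetic above (44%, then 33% of those, then 15% of those, as conditional stage proportions) can be sketched as stage-to-stage dropout percentages. The absolute counts below are hypothetical, scaled from a notional 1,000 women rather than taken from the survey.

```python
def continuum_dropout(stage_counts):
    """Stage-specific dropout along a care cascade.

    stage_counts: ordered list of (stage_name, n_retained), where each n
    is the number still in care at that stage. Returns the percentage
    lost between consecutive stages.
    """
    out = []
    for (prev_name, prev_n), (name, n) in zip(stage_counts, stage_counts[1:]):
        out.append((f"{prev_name} -> {name}", 100.0 * (prev_n - n) / prev_n))
    return out

# Hypothetical cascade loosely shaped like the proportions reported above
cascade = [("eligible", 1000), ("ANC4+", 440),
           ("skilled birth", 145), ("PNC<48h", 22)]
for step, pct in continuum_dropout(cascade):
    print(f"{step}: {pct:.1f}% dropout")
```

Conditioning each stage on the previous one is what makes the final retained fraction so small even when each single transition looks moderate.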
Hussain, A.; Bravo de Guenni, L.; Mateus-Pinilla, N. E.; Smith, R. L.
Tick-borne diseases are now reported from nearly every county in Illinois, and three vector tick species (Amblyomma americanum, Dermacentor variabilis, and Ixodes scapularis) are of particular concern because they are responsible for most tick-borne disease transmission in the state. However, active surveillance is patchy, many counties have little or no sampling, and there is no statewide, quantitative map of relative abundance that can be used to anticipate risk in unsampled areas. To address these gaps, we developed Bayesian hierarchical spatial models to estimate county-level abundance of these three vector tick species in Illinois. Using active surveillance data from 2019-2022, we modeled county-level abundance as a function of climate, land cover, forest fragmentation, and deer habitat suitability. Spatial dependence was captured using a Besag-York-Mollie 2 (BYM2) prior implemented in INLA, along with spatial 5-fold cross-validation to assess predictive performance. A. americanum showed the highest predicted abundance in southern and central Illinois, D. variabilis was widespread but diffuse, and I. scapularis was concentrated in northern and selected central counties. Together, these models provide the first spatial, statewide, uncertainty-aware assessment of tick abundance in Illinois, highlighting priority counties where surveillance lags behind disease risk.
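The structured spatial component of a BYM2 prior is built on an intrinsic CAR model over the county adjacency graph. The sketch below constructs the unscaled ICAR precision matrix Q = D - W from an adjacency structure; it is an illustrative building block, not the authors' INLA code, and the BYM2 prior additionally rescales this component and mixes it with an unstructured i.i.d. effect.

```python
import numpy as np

def icar_precision(adjacency):
    """Unscaled intrinsic CAR precision matrix Q = D - W.

    adjacency: dict mapping each area to an iterable of its neighbours
    (areas sharing a border). Rows of Q sum to zero, the defining
    property of the intrinsic CAR model.
    """
    nodes = sorted(adjacency)
    idx = {n: i for i, n in enumerate(nodes)}
    Q = np.zeros((len(nodes), len(nodes)))
    for n, nbrs in adjacency.items():
        nbrs = list(nbrs)
        for m in nbrs:
            Q[idx[n], idx[m]] = -1.0       # off-diagonal: -1 per shared border
        Q[idx[n], idx[n]] = len(nbrs)      # diagonal: number of neighbours
    return Q

# Toy three-county chain A - B - C
Q = icar_precision({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
print(Q)
```

Because each row sums to zero, Q is rank-deficient; in practice the ICAR field is used with a sum-to-zero constraint, which INLA handles internally.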