Transfusion
Wiley
Preprints posted in the last 30 days, ranked by how well they match Transfusion's content profile, based on 14 papers previously published here. The average preprint has a 0.05% match score for this journal, so anything above that is already an above-average fit.
Mitchell, S. T.; Spyker, D.; Robbins, G.; Rumack, B.
Amatoxin-induced acute liver failure complicates misidentified foraged mushroom ingestion worldwide; abrupt multisystem collapse punctuates apparent improvement. Our prospective single-arm clinical trial investigated proactive toxicokinetic-based management to preserve elimination capacity: sustained enhanced hydration to maintain renal clearance; fasting plus octreotide to suppress meal-driven enterohepatic circulation; and intravenous silibinin to inhibit OATP1B3-mediated hepatic uptake, enabling safe passage and elimination of gallbladder-confined amatoxin-laden bile. In the safety population (N=99), transplant-free recovery (TFR) was 88.0% (87 recoveries, 6 transplants, 6 deaths); in the protocol-adherent efficacy population (n=86), TFR was 98.8% (85 recoveries, 1 transplant, 0 deaths). Multivariable analysis identified uninterrupted hydration as the strongest TFR predictor (P<0.001), followed by earlier silibinin initiation (P=0.003); octreotide shortened INR recovery by 11 hours (P=0.033). These findings support a toxin elimination model in which preserved renal clearance and biliary sequestration are central recovery determinants: the kinetic balance between renal clearance and hepatic uptake governs both recovery and collapse.
Elmsjö, A.; Söderberg, C.; Tamsen, F.; Green, H.; Kugelberg, F. C.; Ward, L. J.
Background: Fatal insulin intoxication remains difficult to diagnose because insulin undergoes rapid degradation after death, limiting the reliability of direct biochemical measurements. This creates diagnostic uncertainty when objective molecular confirmation of insulin excess is required. We hypothesised that insulin excess induces systemic metabolic alterations that persist beyond insulin degradation and can be captured using postmortem metabolomics in a forensic setting. Methods: High-resolution mass spectrometry (HRMS)-based metabolomics was applied to a national cohort comprising 51 fatal insulin intoxications. Orthogonal partial least squares-discriminant analysis (OPLS-DA) models were trained on cases collected between 2017 and 2022 to identify insulin-associated metabolite features using a shared-and-unique-structures approach. Performance was evaluated using two temporally distinct test sets (2023-2024): a matched validation cohort and a heterogeneous forensic cohort reflecting biological variability. Results: Here we show that an insulin-associated metabolomic fingerprint comprising 91 features demonstrated reproducible discrimination across independent cohorts. In the matched cohort (n=59, including 14 insulin cases), insulin intoxication classification achieved 100% sensitivity and 73% specificity within the applicability domain. In the heterogeneous cohort (n=154, including 14 insulin cases), 100% sensitivity was maintained with 72% specificity despite increased biological variability. Univariate analyses demonstrated significant alterations across multiple metabolite classes, including acylcarnitines, fatty acids/lipids, and purine/nucleoside metabolites, with moderate effect sizes, consistent with systemic effects of insulin-induced hypoglycaemia. Conclusions: Fatal insulin intoxication is associated with a reproducible metabolomic fingerprint detectable after death. These findings demonstrate that postmortem metabolomics may serve as a complementary decision-support tool when conventional biomarkers are unreliable.
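For readers checking the classification figures, the reported sensitivity and specificity follow from a simple confusion-matrix tally. A minimal Python sketch, using illustrative label vectors shaped like the matched cohort (not the study's data):

```python
# Minimal sketch: sensitivity/specificity for a binary classifier,
# as reported for the insulin-intoxication fingerprint. Counts are
# illustrative, not taken from the study.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return tp / (tp + fn), tn / (tn + fp)

# Example mirroring the matched cohort's shape (n=59, 14 insulin cases):
# all 14 positives detected (100% sensitivity), 33 of 45 negatives
# correctly rejected (~73% specificity).
y_true = [1] * 14 + [0] * 45
y_pred = [1] * 14 + [0] * 33 + [1] * 12
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 100%, 73%
```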
Masvosva, W.; Haikonen, R.; Gunnar, T. O.; Lehtonen, M.; Keski-Nisula, L.; Rysa, J.; Karkkainen, O.
Maternal smoking during pregnancy is associated with adverse effects on offspring health through impaired placental structure and function. Nicotine and other tobacco-related compounds readily cross the placental barrier, disrupt metabolic pathways, and increase the risk of long-term developmental disorders in newborns. Here, placental metabolic alterations associated with maternal smoking exposure were examined with metabolomics. We used placental samples from the Kuopio Birth Cohort study: 23 nonsmoking control pregnancies, 19 pregnancies with early smoking exposure (cotinine detected in first-trimester but not in at-term samples), and 13 pregnancies with continuous smoking exposure (cotinine detected in both first-trimester and at-term samples). Differences in placental metabolomic profiles were seen between controls and both smoking-exposed groups. For example, increased activity of xenobiotic metabolism pathways was reflected in elevated CYP1A2-related metabolites, e.g., an aminoamide local anesthetic metabolite detected in both smoking-exposed groups (p=0.0042 and 0.0019, respectively). Disruptions in amino acid metabolism were also observed, e.g., reduced placental tryptophan levels (p=0.0209 and 0.0237). Placentas from women who quit smoking during pregnancy showed markers of reduced oxidative stress: lower oxidized glutathione (p=0.0119) and higher ergothioneine (p=0.0426) levels. These findings indicate that many smoking-related effects on the placental metabolome persist beyond acute nicotine exposure, showing long-term biological effects of maternal smoking during pregnancy. Plain language summary: Smoking during pregnancy can change how the placenta functions, which in turn affects the newborn's long-term health. In this study, we compared placentas from nonsmokers, women who quit during pregnancy, and those who kept smoking. Clear chemical differences were seen in the placentas of smoking-exposed pregnancies. The main changes included lowered levels of tryptophan and glutathione, which are important for growth and protection from stress. These results show that smoking-related changes in the placenta can persist beyond active nicotine exposure.
Cremin, C.; Elavalli, S.; Paulin, L.; Arres Reche, J.; Saad, A. A. Y. A.; Attia, A.; Minas, C.; Aldhuhoori, F.; Katagi, G.; Wu, H.; Sidahmed, H.; Mafofo, J.; Soliman, O.; Behl, S.; Pariyachery, S.; Gupta, V.; Ghanem, D.; Sajjad, H.; Cardoso, T.; El-Khani, A.; Al Marzooqi, F.; Magalhaes, T.; Sedlazeck, F. J.; Quilez, J.
Background: The hyperpolymorphic nature and structural complexity of the human leukocyte antigen (HLA) genomic region present challenges for accurate and scalable typing across diverse sample types. While whole-genome sequencing (WGS) offers the opportunity to infer HLA genotypes without targeted enrichment, systematic benchmarks across sequencing platforms, biospecimens and coverage levels remain limited. Results: We assembled a multi-platform resource of WGS datasets derived from short-read (Illumina, MGI) and long-read (Oxford Nanopore Technologies R9 and R10) sequencing, spanning 29 biospecimens including cell lines, blood, buccal swab and saliva. We evaluated the performance of the HLA caller HLA*LA across 13 HLA genes, using a clinically validated assay as reference. WGS-based HLA genotyping achieved ~95% accuracy across sequencing platforms, with Class I loci exhibiting higher accuracy than Class II. Cross-platform concordance was high, and performance remained consistent across Illumina, MGI and Oxford Nanopore chemistries. Analysis of blood, buccal swab and saliva samples showed that blood and buccal swabs supported accurate HLA inference, whereas saliva yielded reduced concordance. Downsampling experiments demonstrated that 15x coverage was sufficient to retain >95% accuracy at two-field resolution, with lower depths supporting lower-resolution typing. Conclusions: Our results demonstrate that WGS provides a robust, platform-agnostic framework for accurate HLA genotyping across sample types and coverage levels. These benchmarks establish practical conditions for reliable HLA inference and underscore the utility of WGS for population-scale HLA analyses and future clinical applications.
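The headline accuracy is reported at two-field resolution, which amounts to truncating each HLA allele name before concordance is computed. A minimal sketch of that comparison, with hypothetical allele calls (not the benchmark's data):

```python
# Sketch: comparing HLA calls at two-field resolution, the level at
# which the benchmark reports >95% accuracy at 15x coverage. Allele
# strings below are illustrative.

def two_field(allele: str) -> str:
    """Truncate an HLA allele name to two-field resolution,
    e.g. 'A*02:01:01:01' -> 'A*02:01'."""
    gene, _, fields = allele.partition("*")
    return f"{gene}*{':'.join(fields.split(':')[:2])}"

def concordance(calls: list[str], truth: list[str]) -> float:
    hits = sum(two_field(c) == two_field(t) for c, t in zip(calls, truth))
    return hits / len(truth)

calls = ["A*02:01:01", "B*07:02", "C*07:01:01:01", "DRB1*15:01"]
truth = ["A*02:01", "B*07:02:01", "C*07:02", "DRB1*15:01"]
print(f"two-field concordance: {concordance(calls, truth):.0%}")  # 75%
```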
Coleman, T.; Mello, M.; Kazanjian, R.; Kazanjian, M.; Olsen, D.; Coleman, J.; Menna, J.
Frequent blood testing is a routine but burdensome reality for many children, particularly those with chronic, rare, or medically complex conditions. Repeated clinic, hospital, and laboratory visits can disrupt family life, increase stress for children and caregivers, and limit access to timely monitoring and research participation. Despite advances in pediatric care, blood collection has remained largely tethered to in-person clinical settings. This study validates a new model: safe, effective, parent-administered pediatric blood collection performed at home. We evaluated the RedDrop ONE capillary blood collection device in a real-world, parent-administered home setting to determine whether non-clinical caregivers can reliably collect clinically meaningful blood samples from children without venipuncture, specialized training, or in-clinic support. Conducted under Institutional Review Board (IRB) oversight, this observational usability study enrolled 50 children aged 3-17 years across a geographically diverse U.S.-based pediatric population, including healthy and medically fragile children with chronic autoimmune and rare diseases. All study activities, including enrollment, consent, instruction, collection, and sample return, were completed remotely, reflecting real-world adoption conditions rather than controlled clinical environments. Parents successfully collected blood samples from their children at home with high consistency, low perceived pain, and strong overall acceptance. Across collections, blood and serum volumes were sufficient and reproducible, and laboratory analysis confirmed strong analytical concordance between samples collected from two different anatomical sites (arm and leg). Parents reported high confidence using the device, short collection times, and a high likelihood of completing collections on the first attempt. Importantly, both parents and children rated the overall experience as better than expected, and parents consistently reported that the RedDrop ONE experience was superior to traditional finger-prick and needle-based venous blood draws. Parents reported minimal child discomfort and greater flexibility by avoiding in-clinic phlebotomy visits. These benefits are especially meaningful for families managing chronic or rare pediatric conditions that require repeated blood monitoring. By enabling blood collection at home, this model reduces travel burden, scheduling constraints, and procedural anxiety while maintaining analytical reliability. This study also demonstrated that parent-administered pediatric blood collection can support real-world clinical workflows beyond research. All samples were successfully shipped overnight at ambient temperature and processed by a CLIA-certified laboratory, supporting feasibility for remote pediatric patient monitoring and decentralized clinical trials. While lipid testing served as the representative clinical use case, the volumes and consistency achieved exceeded the thresholds commonly required for advanced downstream applications, including proteomics, metabolomics, transcriptomics, and genomic analyses. Taken together, these findings validate parent-administered pediatric blood collection as a practical, scalable alternative to in-clinic phlebotomy for many use cases.
By shifting blood collection from the clinic to the home, this approach has the potential to reduce reliance on in-person phlebotomy, integrate seamlessly into routine pediatric care, and expand access to monitoring and research for families who face geographic, logistical, or medical barriers. For health systems, researchers, and parents alike, this study supports a future in which clinically meaningful pediatric blood collection is no longer limited by healthcare facility location but instead centered on the child and family.
Wagle, U.; Sirur, F. M.; Lath, V.; Lingappa, D. J.; R, R.; Kulkarni, N. U.; Kamath, A.
Background: The hump-nosed pit viper is a recognized but neglected medically significant species causing morbidity and mortality, with no specific antivenom available. There are many gaps in our understanding of its envenomation, including burden, clinical syndrome, complications and management. Methodology: The study is a retrospective sub-analysis of the prospective VENOMS registry and hospital records of hump-nosed pit viper envenomation from a single tertiary care center in coastal Karnataka from May 2018 to March 2024. Epidemiology, syndrome, complications and treatment strategies are described. A linear mixed model analysis was conducted to study the effect of different therapeutic interventions in combating venom-induced consumptive coagulopathy (VICC). Principal Findings: Of 46 cases, 24 patients had VICC. The most common complications were AKI (21.7%), TMA (10.9%) and stroke (4.4%). Anaphylaxis to ASV (23.9%) was the most common therapeutic complication. Therapeutic interventions included ASV, administration of blood products and therapeutic plasma exchange (TPE), along with supportive care. The linear mixed model revealed that administration of blood products (p<0.001) had the strongest influence on the INR value, although it often produced only a transient decline in INR. ASV caused only a marginally significant change in INR (p=0.052). The role of TPE could not be statistically inferred; however, individual cases with severe VICC improved without complications, so TPE requires further study but can be considered in critical cases. Conclusions/Significance: This study describes the syndrome of hump-nosed pit viper envenomation, highlights the urgent need for a species-specific antivenom, and recommends treatment strategies that can be used in the interim. Additionally, geo-spatial mapping draws attention to hotspots and supports the hypothesis that HNPV in coastal Karnataka have regionally distinct toxicity trends.
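A linear mixed model of the kind described (repeated INR measurements nested within patients, fixed effects for interventions) could be fit along these lines. A hedged sketch only: the file name and column names (patient_id, inr, blood_products, asv, hours) are hypothetical, not from the registry:

```python
# Sketch of a linear mixed model for serial INR values, with a random
# intercept per patient and fixed effects for interventions and time.
# All data-frame column names here are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("venoms_inr_long.csv")  # hypothetical long-format file

model = smf.mixedlm(
    "inr ~ blood_products + asv + hours",  # fixed effects
    data=df,
    groups=df["patient_id"],               # random intercept per patient
)
result = model.fit()
print(result.summary())  # p-values for each intervention's effect on INR
```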
Kravos, A.; Dolenc, B.; Fartek, N.; Locatelli, I.; Cebron Lipovec, N.; Rogelj Meljo, N.; Kos, M.; Dobovsek, T.; Panter, G.
Iron deficiency (ID) is the most common nutritional deficiency worldwide, often caused by insufficient dietary intake. Oral supplementation is one means of improving iron status. This study evaluated the efficacy and safety of two low-dose iron supplements, >Your< Iron Forte Capsules (YIFC) and Ferrous Sulfate Capsules (FSC), in individuals with dietary ID. One hundred and one participants (mean age 30.6 years; 98% women) with low iron stores (mean serum ferritin 16.1 µg/L) were randomized to receive either YIFC or FSC once daily for 12 weeks. Changes in blood indices and iron-related parameters were assessed at four and 12 weeks of intervention relative to baseline. The primary outcome was the change in hemoglobin (Hb) after 12 weeks. Eighty-seven participants completed the study. Both supplements significantly increased Hb at 12 weeks (YIFC: mean 6.52 g/L, p<0.001; FSC: mean 5.71 g/L, p<0.001). Product-related adverse events (AEs) were few (17% of all AEs) and of mild to moderate intensity only. One participant receiving FSC withdrew due to a probable product-related AE. The frequencies of product-related AEs were similar between study arms; however, statistically significantly more AEs judged to be definitely related to the product occurred in the FSC arm. While product-related AEs were confined to the gastrointestinal tract in the YIFC arm, they affected multiple organ systems in the FSC arm. Supplementation with either YIFC or FSC proved to be an effective, well-tolerated, and safe strategy for improving iron status in non-anemic dietary iron deficiency. In terms of the AE profile, supplementation with YIFC may offer advantages over FSC.
Kowada, A.
The risk of esophageal adenocarcinoma (EAC) in Barrett's esophagus (BE) varies substantially by segment length and dysplasia grade. This study evaluated the cost-effectiveness and health impacts of dysplasia-stratified EAC surveillance strategies for the Japanese BE population. A state-transition model was developed comparing endoscopy, sponge test, breath test, and miRNA test with no surveillance from a healthcare payer perspective over a lifetime horizon. Non-invasive strategies were assessed as primary surveillance tools, with positive results triggering confirmatory endoscopy, and a scenario analysis evaluated AI-assisted endoscopy. Five BE populations of 50-year-old individuals were modeled: ultra-short-segment BE (USSBE), short-segment BE (SSBE), long-segment nondysplastic BE (LSBE-NDBE), LSBE with low-grade dysplasia (LSBE-LGD), and LSBE with high-grade dysplasia (LSBE-HGD). Each modality was evaluated at surveillance intervals of 1, 2, 3, 4, 5, or 10 years. Primary outcomes included net monetary benefits, costs, quality-adjusted life-years, incremental cost-effectiveness ratios, and EAC deaths, with sensitivity analyses assessing parameter uncertainty. Surveillance was not cost-effective for USSBE, SSBE, or LSBE-NDBE. For LSBE-LGD, annual endoscopy was most cost-effective, averting 83 EAC deaths per 10,000 individuals, while for LSBE-HGD, annual breath testing was most cost-effective, averting 295 deaths. These findings support dysplasia-specific surveillance in LSBE, with implications for global surveillance practice.
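A state-transition (Markov) cohort model of this kind advances a cohort through health states in fixed cycles while accumulating discounted costs and QALYs; comparing two strategies' totals yields the ICER. A minimal sketch with placeholder states, probabilities, costs, and utilities (none taken from the study):

```python
# Minimal Markov cohort model sketch: annual cycles over three states
# (BE, EAC, dead), accumulating discounted costs and QALYs. All
# transition probabilities, costs, and utilities are placeholders.
import numpy as np

P = np.array([            # annual transition matrix (illustrative)
    [0.98, 0.01, 0.01],   # BE   -> BE / EAC / dead
    [0.00, 0.70, 0.30],   # EAC
    [0.00, 0.00, 1.00],   # dead (absorbing)
])
cost = np.array([500.0, 40000.0, 0.0])   # cost per cycle in each state
utility = np.array([0.90, 0.60, 0.0])    # QALY weight per cycle
disc = 0.03

cohort = np.array([1.0, 0.0, 0.0])       # start: everyone in BE
total_cost = total_qaly = 0.0
for year in range(50):                   # approximate lifetime horizon
    d = 1.0 / (1.0 + disc) ** year
    total_cost += d * cohort @ cost
    total_qaly += d * cohort @ utility
    cohort = cohort @ P

print(f"discounted cost/person:  {total_cost:,.0f}")
print(f"discounted QALYs/person: {total_qaly:.2f}")
# Comparing two strategies' (cost, QALY) pairs gives the ICER:
# (cost_B - cost_A) / (qaly_B - qaly_A).
```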
Bowen, H. P.; O'Loughlin, G.; Drake, C.; Schleicher, C.; Schulthess, D.
Background: The Most Favored Nation (MFN) policy is a mechanism that incorporates foreign prices to determine the maximum allowable net price for any branded drug within US government-funded healthcare. Two proposed rules, the Global Benchmark for Efficient Drug Pricing ("GLOBE") (90 Fed. Reg. 60,244) for Medicare Part B and the Guarding US Medicare Against Rising Drug Costs ("GUARD") (90 Fed. Reg. 60,338) for Medicare Part D, invoke the Center for Medicare and Medicaid Innovation's payment and service model demonstration and waiver authority, under Section 1115A of the Social Security Act (42 U.S.C. § 1315a), to calculate the US MFN price, defined as the lowest average price within a basket of specified foreign countries. Unlike voluntary manufacturer agreements, GLOBE and GUARD would mandate participation from all applicable manufacturers. Methods: We derive MFN's potential impact on Medicare pricing from a proprietary IQVIA dataset containing net prices, from January 1, 2019 through June 30, 2025, for the top 37 oncology products ranked by total US sales, in the following countries: Australia, Belgium, France, Germany, Ireland, Italy, South Africa, Spain, Switzerland, the UK, and the US. For each drug, we select the lowest GDP-adjusted international price from the basket countries within 60% of US GDP per capita, adjusted for purchasing power parity, and calculate the reduction in US price required to match that MFN price, and hence the corresponding reduction in revenues under MFN. A retrospective Net Present Value (NPV) analysis is then used to address the counterfactual question of whether each drug would have been developed had MFN pricing been in place at the time of its FDA approval. Results: Under MFN, the average reduction in US prices across our drug cohort was 67%. Eighty-four percent of the 37 cancer drugs in our cohort evidenced a negative NPV had MFN been in place at the time of their FDA approval and had it also affected the commercial market. When the analysis is restricted to MFN's impact on Medicare, the indications for these lost drugs cover a total US population of 2.4 million patients. When the analysis is combined across the Medicare and commercial markets, the loss of lead indications affects over 15 million US patients. Conclusions: Mandatory MFN policies reduce the financial incentives required to develop cancer medicines; our projections show a substantial decline in new cancer drug launches, and MFN will likely lead companies to pursue indications for populations outside Medicare's authority. If so, MFN will reduce the number of new therapies for the very population the Executive Orders are ostensibly designed to aid: the Medicare-aged population who require effective new therapies in areas of high unmet medical need, such as late-stage cancers. This creates the perverse outcome of a policy nominally designed to help Medicare beneficiaries instead redirecting innovation away from their most urgent therapeutic needs.
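The retrospective NPV test amounts to discounting a drug's cash flows from approval and asking whether the total stays positive once US revenues are cut by the MFN price reduction. A toy sketch with invented cash flows and a 10% discount rate, chosen only so the sign flips under a 67% US price cut:

```python
# Sketch of the retrospective NPV counterfactual: discount yearly cash
# flows from FDA approval, then repeat with US revenues cut by the
# MFN-mandated price reduction. All figures are illustrative ($M).

def npv(cash_flows, rate=0.10):
    """Net present value of yearly cash flows, year 0 = approval."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

r_and_d = [-1500.0, -500.0]                 # sunk development cost
us_rev = [0.0, 300.0, 600.0, 900.0, 900.0]  # US revenues by year
ex_us_rev = [0.0, 100.0, 200.0, 300.0, 300.0]

def scenario(us_price_cut):
    flows = [rd + us * (1 - us_price_cut) + ex
             for rd, us, ex in zip(r_and_d + [0.0] * 3, us_rev, ex_us_rev)]
    return npv(flows)

print(f"baseline NPV:               {scenario(0.00):,.0f} $M")  # positive
print(f"MFN NPV (67% US price cut): {scenario(0.67):,.0f} $M")  # negative
```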
Cassim, N.; Stevens, W. S.; Glencross, D. K.; Coerzee, L.-M.
Background: In 2004, South Africa's public health system faced the dual challenge of rapidly scaling up antiretroviral therapy (ART) while reducing the cost of laboratory monitoring. At the time, conventional CD4 testing methods were expensive, labour-intensive, and impractical for sustaining a national testing network. This study aimed to assess the financial impact and cost savings associated with the implementation of the PanLeucogated CD4 (PLG/CD4) enumeration method in the South African public sector between 2004 and 2024. Methods: A longitudinal cost analysis was conducted using annual test volumes and state tariffs for PLG/CD4 testing and the 4-colour CD3/CD4/CD8/CD45 T-cell enumeration reference method. State prices for both tariff codes were provided by calendar year in South African Rand (ZAR) and converted to United States Dollars (USD) at the prevailing historical exchange rate. Annual cost savings were calculated by multiplying annual test volumes by the difference in USD test prices between PLG/CD4 and the reference method. Results: There were 50,745,848 PLG/CD4 tests performed over 20 years. The cost-per-test of PLG/CD4 was consistently lower than the reference method, ranging from $4.06 to $9.40 compared to $13.06 to $28.21. Cumulative national savings amounted to USD 626 million. The peak annual savings of $64.6 million occurred in 2011, coinciding with the height of ART enrolment. Cost savings persisted despite a doubling of the exchange rate over the study period. Conclusion: The PLG/CD4 implementation enabled cost-efficient, scalable, quality-assured CD4 testing as part of the national HIV response, reducing reliance on complex and costly technologies while improving coverage. These findings support the critical role of context-specific diagnostic innovation in strengthening health system resilience.
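The savings arithmetic is a per-year multiplication: test volume times the USD price gap between the reference method and PLG/CD4, with tariffs converted from ZAR at that year's exchange rate. A sketch with placeholder tariff and volume figures (not the study's data):

```python
# Sketch of the savings calculation: savings = tests x (ref - PLG/CD4)
# price gap, with ZAR tariffs converted to USD at the year's rate.
# All figures below are placeholders, not the study's tariff data.
years = {
    # year: (tests, plg_zar, ref_zar, zar_per_usd)
    2010: (2_500_000, 60.0, 180.0, 7.3),
    2011: (3_000_000, 62.0, 185.0, 7.2),
}

total_usd = 0.0
for year, (tests, plg, ref, fx) in years.items():
    saving = tests * (ref - plg) / fx
    total_usd += saving
    print(f"{year}: ${saving / 1e6:,.1f}M saved")
print(f"cumulative: ${total_usd / 1e6:,.1f}M")
```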
Senanayake, S.; Lee, S. Y. A.; Kularatna, S.; Win, T. M.; Lee, A.; Lau, Y. H.; Hausenloy, D. J.; Yeo, K. K.; Chan, M. Y.-Y.; Wong, R. C. C.; Loh, S. Y.; Sim, D.; Weien, C.; Tan, K. B.; Tan, N. C.; Graves, N.
Background: Quadruple therapy, comprising an angiotensin receptor-neprilysin inhibitor (ARNI), β-blocker, mineralocorticoid receptor antagonist (MRA), and sodium-glucose cotransporter 2 inhibitor (SGLT2i), is guideline-recommended for heart failure with reduced ejection fraction (HFrEF). However, uptake in Singapore remains low. This study evaluated the cost-effectiveness of scaling up quadruple therapy from the current 30% uptake to realistic (80%) and stretch (100%) targets. Methods: We developed a decision-analytic model combining a decision tree and Markov structure to simulate clinical and economic outcomes over a 10-year horizon from the Singapore healthcare system perspective. Transition probabilities were estimated using local real-world data for current regimens and published literature for quadruple therapy. Costs were derived from hospital billing data and drug utilisation patterns. A probabilistic sensitivity analysis (1,000 simulations) assessed uncertainty. The willingness-to-pay (WTP) threshold was S$45,000 per quality-adjusted life-year (QALY) gained. Results: Both scale-up scenarios were cost-effective. Compared to current practice, the 80% uptake scenario resulted in an incremental cost of S$2.57M and 110 additional QALYs (ICER: S$23,392/QALY) per 1,000 patients over 10 years, while the 100% uptake scenario yielded 137 QALYs at an incremental cost of S$2.88M (ICER: S$21,117/QALY). Under conservative assumptions, both scenarios remained cost-effective. The probability of being cost-effective was 92% (80% uptake) and 96% (100% uptake). Interpretation: Scaling up quadruple therapy for HFrEF in Singapore is highly cost-effective. Implementation strategies to close the treatment gap should be prioritised to improve outcomes and maximise value in heart failure care.
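The reported ICERs are incremental cost divided by incremental QALYs versus current practice; plugging in the abstract's rounded totals approximately reproduces them, with small differences attributable to rounding:

```python
# Sketch: ICER = incremental cost / incremental QALYs. Inputs are the
# abstract's rounded totals, so outputs only approximate the reported
# S$23,392 and S$21,117 figures.

def icer(d_cost, d_qaly):
    return d_cost / d_qaly

print(f"80% uptake:  S${icer(2_570_000, 110):,.0f}/QALY")  # ~23,400
print(f"100% uptake: S${icer(2_880_000, 137):,.0f}/QALY")  # ~21,000
# Both fall well below the S$45,000/QALY willingness-to-pay threshold.
```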
Gernand, A. D.; Walker, R.; Pan, Y.; Mehta, M.; Sincerbeaux, G.; Gallagher, K.; Bebell, L. M.; Ngonzi, J.; Catov, J. M.; Skvarca, L. B.; Wang, J. Z.; Goldstein, J. A.
Background: Placental growth and function are imperative for healthy fetal growth; data on placentas can inform research and clinical care. Measuring placental size after delivery should be easy, but current methods are hard to standardize and error-prone. We developed PlacentaVision, using artificial intelligence (AI)-based models, to automatically, accurately, and precisely measure placentas from digital photographs. Objective: We aimed to compare placental disc morphology between gross pathology examination (human measurements) and our automated PlacentaVision model (AI measurements). Methods: PlacentaVision is a multi-site study to assess placental morphology, features, and pathologies from digital photographs. We built a large dataset of digital placenta photographs and clinical data from singleton births at three large hospitals: Northwestern Memorial (Chicago; n=24,933), UPMC Magee-Womens (Pittsburgh; n=1,198), and Mbarara Regional Referral (Uganda; n=1,715). Data and images came from the medical record for Northwestern, a biobank study for Magee, and our prospective studies for Mbarara. We compared long and short disc axis length (defined by Amsterdam criteria) between human and AI-based PlacentaVision measurements by calculating the difference and using Bland-Altman analysis; we stratified by site, disc shape, infant sex, and term/preterm birth. Results: Mean (SD) disc length was 19.2 (3.1) and 18.6 (3.1) cm from PlacentaVision and human measurement, respectively, a difference of 0.57 (2.19) cm. Disc width was 16.3 (2.3) cm and 16.1 (2.4) cm, respectively, a difference of 0.25 (1.85) cm. Bland-Altman limits of agreement were -3.7 to 4.9 cm for length and -3.4 to 3.9 cm for width. Irregularly shaped placentas had a greater difference between PlacentaVision and human measurements than round/oval placentas (length differences of 1.53 and 0.45 cm, respectively). Further, there were length differences by site (Northwestern 0.6 cm, Magee 0.0 cm, Mbarara 0.4 cm) and gestational age at birth (preterm 0.71 cm, term 0.53 cm), but similar results for male and female placentas. Results for width were similar to length. Conclusions: AI-based measurements were within a centimeter of human measurements overall. The larger differences for irregular shapes and preterm births may indicate that it is difficult for humans to measure irregular or small placentas according to protocol. PlacentaVision can automate and standardize the process.
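The Bland-Altman limits of agreement quoted here are the mean difference plus or minus 1.96 times the SD of the differences; applying that formula to the reported means and SDs reproduces the stated limits:

```python
# Sketch: Bland-Altman limits of agreement = mean diff +/- 1.96 * SD.
# Inputs are the means and SDs reported in the abstract.

def limits_of_agreement(mean_diff, sd_diff):
    half_width = 1.96 * sd_diff
    return mean_diff - half_width, mean_diff + half_width

lo, hi = limits_of_agreement(0.57, 2.19)   # disc length (cm)
print(f"length: {lo:.1f} to {hi:.1f} cm")  # -3.7 to 4.9, as reported
lo, hi = limits_of_agreement(0.25, 1.85)   # disc width (cm)
print(f"width:  {lo:.1f} to {hi:.1f} cm")  # -3.4 to 3.9, as reported
```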
Yu, J.
Vaccination frequently elicits suboptimal immunogenicity in organ transplant recipients, particularly those on long-term immunosuppressive therapy, highlighting the need for improved understanding of immunosuppression mechanisms and optimized vaccination strategies. This study enrolled a cohort of 132 individuals and observed significantly lower antibody levels in kidney transplant recipients (KTRs) compared to non-transplant controls (non-KTRs). Antibody levels were inversely associated with both the dosage and duration of immunosuppressive therapy. Complementary small-animal studies demonstrated that immunosuppressive treatment impaired antibody production in a dosage-dependent and reversible manner, primarily by depleting immune cells, notably B cells. A single dose of an adenoviral vector-based vaccine demonstrated enhanced immunogenicity relative to two doses of an alum-adjuvanted protein vaccine, inducing potent neutralizing antibodies (NAbs) and a Th1-biased T-cell response even under continuous immunosuppression. The enhanced response was driven by reduced interference from pre-existing antibodies, sustained transgene expression, and the reprogramming of lipid metabolism to activate T and B cells. Our findings advocate for tailored vaccination strategies, positioning adenoviral vectors as a candidate modality for this vulnerable population.
Rodrigues dos Santos, J. P.; Montazeri, N. X.; Perovic, T.; Kendziorra, E.
Cryopreservation, or cryonics, is an experimental procedure that preserves individuals at cryogenic temperatures after legal death in the hope of future revival. Although Switzerland hosts the Schengen Area's first dedicated cryopreservation facility, public sentiment toward the practice has remained largely unexamined. This exploratory survey of 249 Swiss adults assessed awareness, ethical views, and openness to cryopreservation. Results show broad support for individual autonomy: most respondents endorsed the right to choose cryopreservation when performed to high medical standards (86.7%) and did not support legal restrictions (83.5%). Although only a minority expressed personal interest, nearly one in five respondents (20.1%) reported active interest or intent to sign up. Openness to cryopreservation appears driven more by values, such as a preference for life extension, and by prior exposure than by demographics. These findings provide the first empirical snapshot of Swiss public opinion on cryopreservation, highlighting a largely permissive public stance and suggesting considerable engagement with the topic.
Mekniran, W.; Bruegger, V.; Fuchs, M.; Jin, Q.; Wirth, B.; Bilz, S.; Braendle, M.; Fleisch, E.; Kowatsch, T.; Jovanova, M.
Objectives: Digital biomarkers offer scalable screening for type 2 diabetes, yet adoption is stalled by uncertainty regarding economic viability. This study evaluates the cost-effectiveness and budget impact of digital screening compared to opportunistic screening from a Swiss payer perspective. Methods: A probabilistic Markov cohort model was developed to simulate at-risk Swiss adults (age ≥45, BMI ≥25 kg/m²) over a 40-year horizon. The model incorporates a digital attrition parameter, inputs derived from Swiss-specific sources (e.g., the CoLaus study and FSO life tables), and statutory tariffs. Costs and outcomes were discounted at 3.0%. Results: In the deterministic base case, digital screening yielded an incremental cost-effectiveness ratio of CHF 2,912 per quality-adjusted life-year gained. Probabilistic sensitivity analysis indicated a 93.2% probability of cost-effectiveness at the CHF 50,000 threshold. The budget impact analysis estimated a Year 1 gross investment of CHF 27 million to identify prevalent cases, followed by long-term savings from averted complications. Conclusions: Digital screening can be highly cost-effective in Switzerland. While the required Year 1 gross investment poses a liquidity challenge, reimbursement via pathway-oriented models under the Swiss tariff could align incentives with long-term complication avoidance.
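The 93.2% figure is the share of probabilistic-sensitivity-analysis draws whose net monetary benefit (NMB = WTP × ΔQALY − ΔCost) is positive at the CHF 50,000 threshold. A sketch with hypothetical distributions for the incremental outcomes (not the model's actual inputs):

```python
# Sketch: probability of cost-effectiveness from a PSA = fraction of
# simulations with positive net monetary benefit at the WTP threshold.
# The incremental-outcome distributions below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000
wtp = 50_000                     # CHF per QALY

# Hypothetical joint draws of incremental QALYs and incremental costs.
d_qaly = rng.normal(loc=0.05, scale=0.03, size=n_sims)
d_cost = rng.normal(loc=150.0, scale=400.0, size=n_sims)

nmb = wtp * d_qaly - d_cost
print(f"P(cost-effective at CHF {wtp:,}/QALY): {(nmb > 0).mean():.1%}")
```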
Pagliuca, S.; Mooyaart, J. E.; Ayuk, F.; Zeiser, R.; Potter, V.; Dreger, P.; Bethge, W.; Hilgendorf, I.; Michonneau, D.; Rambaldi, A.; Sengeloev, H.; Passweg, J.; Richardson, D.; Gedde-Dahl, T.; Kinsella, F.; Edinger, M.; Mielke, S.; Eder, M.; Andreani, M.; Crivello, P.; Merli, P.; Hoogenboom, J. D.; de Wreede, L. C.; Chabannon, C.; Kuball, J.; Gurnari, C.; Fleischhauer, K.; Ruggeri, A.; Lenz, T. L.
Allogeneic hematopoietic cell transplantation (allo-HCT) hinges on a delicate trade-off between graft-versus-tumor control and graft-versus-host disease (GvHD), mediated by donor T-cell recognition of antigens presented by recipient human leukocyte antigen (HLA) molecules. We hypothesized that, beyond allele-level matching, sequence divergence at peptide-binding grooves across donor and recipient HLA loci shapes these responses. To this end, we evaluated the effect of HLA evolutionary divergence (HED), a metric quantifying amino acid variability at HLA peptide-binding sites, on selected hematological malignancies in 4,695 patients undergoing allo-HCT from a 9/10 mismatched unrelated donor (MMUD), reported to the EBMT database. We examined (i) locus-specific recipient HED (HED-R) and (ii) "HED-mismatch" (HED-MM), capturing immunopeptidome divergence at the mismatched locus. While dichotomous mismatch status explained differences in survival and acute GvHD risk (with overall greater detriment for class I loci), HED metrics uncovered substantial within-mismatch heterogeneity. In the DRB1-mismatched subgroup, HED-MM at this locus independently predicted inferior relapse-free survival (RFS), with an attenuating time-dependent association further modulated by cross-locus HED-R. In this subgroup, higher HED-R at HLA-A and HLA-C was associated with increased risks of acute GvHD and non-relapse mortality, respectively. Among HLA-B-mismatched pairs, higher DRB1 HED-R was associated with worse overall survival (OS) and RFS and higher relapse risk. In the HLA-A-mismatched subgroup, higher HED-R at HLA-A increased chronic GvHD risk. Collectively, HED-derived metrics complement conventional mismatch classification by capturing qualitative differences in donor-recipient immunopeptidome interactions and reveal a complex, non-linear interplay among alleles across mismatch subgroups that modulates the clinical impact of mismatching. Key points: (i) In mismatched unrelated HCT, baseline risk varies across mismatch constellations, with class I mismatches more detrimental than class II. (ii) HED complements conventional HLA mismatch classification by capturing qualitative donor-recipient immunopeptidome interactions.
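One common formulation of HED averages the Grantham distance between two alleles' aligned peptide-binding-domain sequences, position by position. A toy sketch of that calculation; the two-entry distance table is a stub standing in for the full Grantham matrix, and the sequences are invented:

```python
# Sketch of the HED idea: mean per-position Grantham distance between
# two aligned HLA peptide-binding-domain sequences. GRANTHAM below is
# a two-entry stub; the real matrix covers all 190 amino acid pairs.
GRANTHAM = {frozenset("AS"): 99, frozenset("LV"): 32}  # stub values

def grantham(a: str, b: str) -> float:
    if a == b:
        return 0.0
    return GRANTHAM[frozenset(a + b)]

def hed(seq1: str, seq2: str) -> float:
    """Mean per-position Grantham distance over the aligned domain."""
    assert len(seq1) == len(seq2)
    return sum(grantham(a, b) for a, b in zip(seq1, seq2)) / len(seq1)

# Toy aligned peptide-binding-domain fragments:
print(hed("ALSA", "AVAA"))  # (0 + 32 + 99 + 0) / 4 = 32.75
```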
Daniels, B.; Zhang, W.; Nguyen, H.; Duong, D.
We developed and validated a self-administered clinical vignette platform powered by a large language model (LLM), deployed through a SurveyCTO web survey, to measure primary health care provider competencies in Vietnam. In a pilot focus group, nine physicians rated LLM-simulated patient interactions as realistic (mean 3.78/5) and user-friendly. In the validation phase, 22 providers completed 132 vignette interactions across ten clinical scenarios in Vietnamese. Essential diagnostic checklist scores (human-coded from translated transcripts) correlated with expert clinician evaluations (Pearson's ρ = 0.55-0.60). LLM-automated coding of checklist items from translated English transcripts correlated reasonably with human coding (ρ = 0.53), and coding directly from Vietnamese transcripts performed comparably (ρ = 0.51), suggesting that a separate translation step may not be necessary. The total cost of 132 chatbot interactions was under USD 2. LLM-driven conversational vignettes represent a low-cost and scalable method for assessing provider competencies in respondents' local language, eliminating the need for extensive enumeration staff while preserving the open-ended format critical to vignette validity, and additionally introducing flexible feature extraction from transcripts using grading rubrics. The platform is open-source and designed for replication in other health system contexts. Author summary: Measuring the clinical skills of healthcare providers is essential for improving the quality of care, but current survey methods are expensive and require trained enumerators to travel to health facilities in person. We developed a new approach that uses large language models (LLMs), the technology behind tools like ChatGPT and Claude, to simulate patients in realistic clinical conversations that healthcare providers can complete on their phones or laptops over the Internet in their own language. In Vietnam, we tested this tool with 31 physicians across ten clinical scenarios. Providers found the simulated patient conversations realistic and easy to use. We also tested whether LLMs could automatically score the conversations; automated scoring showed reasonable agreement with human scoring and performed nearly as well when grading directly from Vietnamese, without a separate translation step. When we compared these scores against holistic expert physician ratings of the same conversations, they agreed well, suggesting that automatic transcript grading based on rubrics produces meaningful measures of clinical skill. This tool costs less than two US dollars for over a hundred consultations and required no in-person surveyors, making it potentially transformative for routine, large-scale monitoring of healthcare quality in resource-limited settings. The platform and code are openly available for adaptation.
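The agreement statistics here are Pearson correlations between per-interaction scores from two raters (human vs. LLM coding, or checklist vs. expert rating). A minimal sketch with illustrative score vectors, not the study's data:

```python
# Sketch: rater agreement as a Pearson correlation between
# per-interaction checklist scores. Scores below are illustrative.
from scipy.stats import pearsonr

human_scores = [0.80, 0.55, 0.90, 0.40, 0.70, 0.65]  # human-coded
llm_scores   = [0.75, 0.60, 0.85, 0.50, 0.60, 0.70]  # LLM-coded

rho, p_value = pearsonr(human_scores, llm_scores)
print(f"Pearson rho = {rho:.2f} (p = {p_value:.3f})")
```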
Zafar, W.; Tavares, S.; Hu, Y.; Brubaker, L.; Green, J.; Mehta, S.; Grams, M. E.; Chang, A. R.
Background: Albuminuria is associated with increased risk of cardiovascular disease (CVD), heart failure, and progression of chronic kidney disease (CKD). Early detection of albuminuria through spot urine albumin-creatinine ratio (UACR) testing enables more accurate risk stratification and timely use of preventative therapies, yet testing rates remain unacceptably low among people with hypertension. Methods: We evaluated two EHR-embedded clinical decision support (CDS) strategies at Geisinger Health System to increase UACR testing in individuals with hypertension: an OurPractice Advisory (OPA) from January 2022 to August 2022, and a Health Maintenance Topic (HMT) in the Care Gaps section of Storyboard from August 2022 that continues to date. We evaluated UACR testing rates from 2020 to 2023 in Geisinger primary care and compared them to a control group of healthcare systems in the Optum Labs Data Warehouse (OLDW). Patients were excluded if they had UACR testing in the preceding 3 years, had diabetes or CKD, or were receiving palliative/hospice care. Results: We included 58,876 individuals in Geisinger (mean age 59.4 years, 49.6% female) and 1,427,754 in OLDW (61.0 years, 49% female). UACR testing in Geisinger (2.97% in 2020; 2.8% in 2021; 9.7% in 2022; 17.5% in 2023) showed a significant increase compared to the control health systems (2.08%, 2.26%, 3.35%, and 3.40%, respectively). Results were consistent after adjusting for age, sex, and race. Conclusion: The OPA increased UACR testing ~3-fold, and the HMT was associated with further improvement (~6-fold vs. baseline) among those with hypertension, suggesting an important role for CDS design in closing care gaps.
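The fold-change claims in the conclusion follow directly from the reported annual testing rates against the 2020 baseline; a quick check:

```python
# Sketch: fold changes in UACR testing rates vs. the 2020 baseline,
# using the annual percentages reported in the abstract.
rates = {2020: 2.97, 2021: 2.8, 2022: 9.7, 2023: 17.5}  # % tested
baseline = rates[2020]

for year in (2022, 2023):
    print(f"{year}: {rates[year] / baseline:.1f}-fold vs 2020")
# 2022 (OPA era): ~3.3-fold
# 2023 (HMT era): ~5.9-fold
```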
Fink, A.; Burzer, F.; Sacalean, V.; Rau, S.; Kaestingschaefer, K. F.; Rau, A.; Koettgen, A.; Bamberg, F.; Jaenigen, B.; Russe, M. F.
Background: Kidney volumetry derived from CT has been proposed as a surrogate of renal function in living kidney donor evaluation. However, clinical integration has been limited by reader-dependent workflows and semiautomatic methods susceptible to image quality. Purpose: To evaluate whether fully automated CT-based segmentation of renal cortex, medulla, and total parenchymal volume provides reproducible volumetric biomarkers associated with global and split renal function in living kidney donor candidates. Materials and Methods: In this retrospective single-center study, 461 living kidney donor candidates (2003-2021) underwent contrast-enhanced abdominal CT. A convolutional neural network was trained to automatically segment cortical, medullary, and total parenchymal volumes on arterial-phase images. Segmentation performance was evaluated against manual reference annotations. Volumes were indexed to body surface area. Associations with eGFR, 24-hour creatinine clearance, cystatin C, and tubular clearance were assessed using the Spearman correlation coefficient (ρ), and side-specific volume fractions were compared with scintigraphy-derived split function. Results: Automated segmentation achieved excellent agreement with expert reference segmentations (Dice 0.95 for cortex; 0.90 for medulla). eGFR correlated moderately with cortical (ρ = 0.46) and total parenchymal volume (ρ = 0.45), and modestly with medullary volume (ρ = 0.30). Similar associations were observed for other global measures, with the strongest correlation between cortical volume and tubular clearance (ρ = 0.53). Side-specific volume fractions correlated with scintigraphy-derived split renal function (ρ = 0.49-0.56; all p < 0.001). Conclusion: Automated CT-based renal subcompartment segmentation provides reproducible volumetric biomarkers within routine donor evaluation. Cortical volume performs comparably to total parenchymal volume and tracks split renal function at the cohort level, suggesting potential utility in donor assessment.
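The Dice scores reported for segmentation quality measure voxel overlap between the automated and manual masks: twice the intersection divided by the combined size. A minimal sketch on toy binary masks:

```python
# Sketch: Dice = 2|A ∩ B| / (|A| + |B|), the overlap metric behind the
# reported 0.95 (cortex) and 0.90 (medulla). Toy masks only.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([[1, 1, 0], [1, 0, 0]])
manual = np.array([[1, 1, 0], [0, 0, 1]])
print(f"Dice = {dice(auto, manual):.2f}")  # 2*2 / (3+3) = 0.67
```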
Chen, K.; Tian, X.; Ding, Y.; Dong, Z.; Tao, R.; Fan, Y.; Chen, Z.; Zha, B.; Li, X.; Li, W.
Objective: Post-thrombotic syndrome (PTS), a common complication of deep vein thrombosis, lacks objective diagnostic biomarkers, and its molecular mechanisms remain poorly understood. This study aimed to identify plasma biomarkers and clarify pathways using integrated multi-omics and machine learning. Methods: Proteomic and metabolomic profiling of 75 PTS patients and 75 controls was performed. Differential expression analysis, pathway enrichment, and protein-metabolite network analysis were conducted. A multi-algorithm machine learning framework with eight feature selection methods prioritized biomarkers, which were then validated, and 14 models were assessed. Results: 1,104 proteins and 1,891 metabolites were differentially expressed. The citrate cycle and unsaturated fatty acid biosynthesis pathways were enriched. Three proteins, namely DIP2B, KNG1, and SUCLG2, were consistently selected as core biomarkers; all were significantly downregulated in PTS and externally validated. A random forest model utilizing these proteins achieved an accuracy of 97.7% in independent testing, with SUCLG2 being the most influential predictor. Conclusion: This study identifies a novel three-protein biomarker panel for the diagnosis of PTS and reveals an immunometabolic axis in its pathogenesis, linking inflammatory regulation with mitochondrial energy metabolism. These findings provide valuable insights for the development of diagnostic tools and targeted therapeutic approaches.
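The final classifier described is a random forest over the three selected proteins with held-out testing. A hedged sketch on synthetic data that mirrors the reported direction of effect (all three proteins downregulated in cases); the measurements are simulated, not the study's:

```python
# Sketch: random forest on three protein features (standing in for
# DIP2B, KNG1, SUCLG2) with a held-out test split. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 150
# Cases (label 1) have all three proteins shifted downward, mirroring
# the reported downregulation in PTS.
labels = rng.integers(0, 2, size=n)
X = rng.normal(loc=0.0, scale=1.0, size=(n, 3)) - labels[:, None] * 1.5

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.1%}")
print("feature importances (DIP2B, KNG1, SUCLG2):",
      clf.feature_importances_.round(2))
```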