Transplantation
○ Ovid Technologies (Wolters Kluwer Health)
Preprints posted in the last 90 days, ranked by how well they match Transplantation's content profile, based on 13 papers previously published here. The average preprint has a 0.03% match score for this journal, so anything above that is already an above-average fit.
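The feed does not disclose how the match score is computed; a minimal sketch of one plausible approach is cosine similarity between a preprint's text embedding and the centroid of embeddings of the journal's previously published papers (the function `cosine_match` and its vectors are hypothetical, not the feed's actual method):

```python
import math

def cosine_match(preprint_vec, journal_profile_vec):
    """Hypothetical match score: cosine similarity between a preprint's
    embedding and the centroid of a journal's published-paper embeddings.
    Returns a value in [-1, 1]; higher means a closer content match."""
    dot = sum(a * b for a, b in zip(preprint_vec, journal_profile_vec))
    norm_a = math.sqrt(sum(a * a for a in preprint_vec))
    norm_b = math.sqrt(sum(b * b for b in journal_profile_vec))
    return dot / (norm_a * norm_b)
```

A score would then be ranked against the average across all recent preprints, as the description above does with its 0.03% baseline.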
Lovinfosse, P.; Bouquegneau, A.; Massart, A.; Pipeleers, L.; Bonvoisin, C.; Carp, L.; Everaert, H.; Jadoul, A.; Dendooven, A.; Geers, C.; Grosch, S.; Erpicum, P.; Hellemans, R.; Seidel, L.; Weekers, L.; Hustinx, R.; Jouret, F.
Background: Subclinical kidney allograft acute rejection (SCR) corresponds to "the unexpected histological evidence of acute rejection in a stable patient". The diagnosis of SCR relies on surveillance biopsy. Positron emission tomography/computed tomography (PET/CT) after injection of 18F-fluorodeoxyglucose ([18F]FDG) has been proposed as a non-invasive screening approach. In the present multicenter prospective study, we assess the diagnostic yield of [18F]FDG PET/CT to rule out SCR in stable kidney transplant recipients at 3 months post-transplantation. Methods: From 01/2021 to 03/2025, we prospectively combined surveillance biopsy and [18F]FDG PET/CT at ~3 months post-transplantation in adult kidney transplant recipients from 4 independent imaging centers. The mean standardized uptake value (mSUV) was measured in the kidney cortex and referenced as a ratio to the psoas muscle mSUV (mSUVR). Results: Our multicentric 185-patient cohort was categorized per Banff 2022: normal (n=158), borderline (n=18), and SCR (n=9, including 6 T-cell-mediated rejection and 3 microvascular inflammation). No significant correlation was observed between the mSUVR and the ti score (R=0.032, p-value=0.67). The mSUVR reached 2.33 [1.97-2.93], 2.71 [2.50-3.33] and 2.42 [2.27-3.14] in the normal, borderline and SCR groups, respectively. In multivariate models stratified by center, the risk of non-normal histology (n=27, including borderline and SCR) increased with donor age (OR=1.05 [1.01-1.1], p=0.02) but not with the mSUVR (OR=4.11 [0.91-18.48], p=0.07). The risk of biopsy-proven SCR (n=9) was not significantly associated with mSUVR. Conclusions: The mSUVR of [18F]FDG PET/CT does not reliably rule out SCR on surveillance biopsy.
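The imaging biomarker above is a simple ratio; a minimal sketch, with hypothetical uptake values (the function name and example numbers are illustrative, not from the study):

```python
def msuvr(kidney_cortex_msuv: float, psoas_msuv: float) -> float:
    """Mean standardized uptake value of the kidney cortex, referenced as a
    ratio to the psoas muscle mSUV (the mSUVR described in the abstract)."""
    if psoas_msuv <= 0:
        raise ValueError("psoas mSUV must be positive")
    return kidney_cortex_msuv / psoas_msuv

# Hypothetical reading: cortex mSUV 4.66 against psoas mSUV 2.0 gives mSUVR 2.33
print(round(msuvr(4.66, 2.0), 2))
```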
Vranken, A.; Coemans, M.; Bemelman, F. J.; Chauveau, B.; Debyser, T.; Florquin, S.; Koshy, P.; Kuypers, D.; Masset, C.; Pagliazzi, A.; Vanhoutte, T.; Wellekens, K.; Vaulet, T.; Kers, J.; de Vries, A. P. J.; Meziyerh, S.; Verbeke, G.; Naesens, M.
Background: The effects of Banff histological diagnoses on kidney transplant outcome have been well characterized. However, repeated observation of such histological injury across multiple biopsies in kidney transplant recipients remains insufficiently explored. Methods: In an observational cohort (N=1819 transplantations with 5736 post-transplant biopsies), recurrent event survival models quantified transitions between diagnoses of T-cell-mediated rejection (TCMR), antibody-mediated rejection (AMR), DSA-negative C4d-negative microvascular inflammation (MVI DSA-/C4d-), BK polyomavirus nephropathy (BKPyVAN), borderline TCMR (bTCMR), and probable AMR (pAMR), revealing patterns in the disease trajectories. In two observational cohorts (N=1818 transplantations with 5732 biopsies; N=853 transplantations with 975 biopsies), time-dependent cumulative covariates were constructed for TCMR, AMR, MVI DSA-/C4d-, and BKPyVAN, enabling estimation of associations of repeated diagnoses with graft failure using multivariable cause-specific Cox models. Results: The incidence rate of a diagnosis was most strongly associated with an earlier diagnosis of the same type, but associations between different types of diagnoses also occurred. The hazard of kidney graft failure was significantly increased by repeated observation of TCMR in multiple biopsies (HR 7.97, 95% CI 4.94-12.86), as well as by repeated AMR (HR 6.19, 95% CI 3.15-12.17), repeated MVI DSA-/C4d- (HR 4.53, 95% CI 2.15-9.54), and repeated BKPyVAN (HR 10.90, 95% CI 5.83-20.35). The hazard of graft failure was increased more after repeated diagnoses than after first diagnoses. The effects of repeated TCMR and repeated AMR remained significant even when observed in protocol biopsies in the absence of graft dysfunction. Repeated observation of BKPyVAN was the most detrimental of all diagnoses when observed in indication biopsies, but the least harmful when observed in protocol biopsies.
Conclusion: Incidence of Banff histological diagnoses appears to be affected by earlier diagnoses, especially those of the same type. These repeated observations of a specific diagnosis have an additional effect on the hazard of graft failure, underscoring a critical unmet need for adequate treatment strategies for these recurrent or persistent injury processes. Lay summary: In two observational cohorts of 1819 and 750 kidney transplant recipients, kidney transplant biopsies were taken at multiple time points after transplantation. Based on the Banff classification for transplant pathology, various post-transplant diseases were diagnosed, often at more than one time point during follow-up. We assessed patterns in the occurrence of diagnoses over time, and related these diagnoses to survival of the kidney grafts using survival models with time-dependent cumulative diagnoses. We found that repeated observation of the same diagnosis was much more common than consecutive observations of different diagnoses. Repeated diagnoses of tissue injury also decreased kidney graft survival more than single diagnoses did. This indicates that treatment options for patients with repeated or persistent diagnoses are currently inadequate and novel strategies are needed.
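The "time-dependent cumulative covariate" construction described above can be sketched in a few lines: at each biopsy, the covariate for a diagnosis type is the count of earlier observations of that type. This is a minimal stdlib illustration of the bookkeeping, not the study's actual modelling code (which fits cause-specific Cox models on top of such covariates):

```python
from collections import Counter

def cumulative_diagnosis_covariates(biopsies):
    """biopsies: iterable of (time, diagnosis) pairs for one transplant.
    Returns, per biopsy in time order, the counts of strictly earlier
    diagnoses of each type, i.e. a time-dependent cumulative covariate."""
    seen = Counter()
    rows = []
    for t, dx in sorted(biopsies):
        rows.append((t, dict(seen)))  # covariate reflects only earlier events
        seen[dx] += 1
    return rows

# One hypothetical patient: two TCMR episodes, then an AMR diagnosis
history = [(3, "TCMR"), (12, "TCMR"), (24, "AMR")]
print(cumulative_diagnosis_covariates(history))
```

At the third biopsy the covariate already records two prior TCMR episodes, which is exactly the "repeated diagnosis" exposure the hazard ratios above quantify.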
Riley, J.; Roberts, B.; Rashid, B.; Barton, L.; Wellberry Smith, M.; Sutcliffe, R.; Billingham, E.; Pettit, S.; Jones, G.; Fisher, A. J.; Parmar, J.; Lim, S.; Ravanan, R.; Manas, D.
Background: Each year around 4,500 people in the UK receive an organ transplant. These surgeries can be life-changing and life-extending for patients but are also associated with significant costs for the health service. However, by reducing the need for other expensive interventions involved in non-transplant care for organ failure, such as dialysis, some of these costs may be recovered. Methods: We assessed the lifetime costs and benefits associated with transplantation, focussing on deceased donor adult transplants for kidneys, livers, hearts, and lungs. We incorporated costs of organ retrieval, surgery, post-transplant secondary care, and medications, as well as impacts on quality and duration of life. These were compared to the cost of managing patients with end-stage organ failure for whom no transplant occurs. Results: Kidney transplants were found to be cost saving, with lifetime costs approximately 220,000 GBP lower than alternative treatment. Heart transplants and liver transplants were found to be more cost-effective than thresholds used by NICE for new medicines, at approximately 17,000 GBP to 18,000 GBP per quality-adjusted life year gained. Lung transplants were the least cost-effective organ transplant, with a cost per quality-adjusted life year gained of over 50,000 GBP. Conclusions: Although transplants can be costly, not providing a transplant to a patient who needs one also brings significant costs. Kidney transplants can save the health system money by reducing the need for dialysis. Increasing the number of kidneys available for transplant could save the NHS money whilst saving and improving lives.
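The cost-effectiveness metric used above is the incremental cost per quality-adjusted life year (QALY) gained; a minimal sketch with invented figures (the function and the example numbers are illustrative, not the study's data):

```python
def cost_per_qaly(lifetime_cost_transplant: float,
                  lifetime_cost_alternative: float,
                  qalys_gained: float) -> float:
    """Incremental cost per QALY gained, the metric compared against NICE
    thresholds. A negative value means the transplant is cost saving overall,
    as the study reports for kidney transplantation."""
    return (lifetime_cost_transplant - lifetime_cost_alternative) / qalys_gained

# Hypothetical figures only: an extra 80,000 GBP spread over 4.5 QALYs gained
print(round(cost_per_qaly(180_000, 100_000, 4.5)))
```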
Ctortecka, C.; Jaishankar, D.; Su, P.; Huang, C.-F.; Pla, I.; Henning, N.; Hollas, M. A. R.; Callegari, M. A.; Taylor, M. E.; Lee, Y. M.; Daud, A.; Pinelli, D. F.; Rohan, V.; Caldwell, M. A.; Forte, E.; Sanchez, A.; Kelleher, N. L.; Nadig, S. N.
Kidney transplantation faces a critical paradox: while thousands await organs, approximately 30% of potential deceased donor kidneys are discarded for various reasons, including subjective assessments due to the lack of an objective molecular biomarker of preservation quality. Here, we applied novel "top-down" proteoform imaging mass spectrometry across living donor (LD), deceased donor (brain death or cardiac death), and discarded human kidneys to quantify proteoforms correlating with post-transplant kidney function. This approach preserves post-translational modifications and splice variants, revealing molecular tissue variability beyond protein presence. LD kidneys displayed robust metabolic signatures, including L-xylulose reductase and cytochrome oxidase subunits, whereas deceased donor and discarded organs showed elevated cellular stress markers such as alpha-B-crystallin and peroxiredoxin 1. Post-transplant blood proteoform analysis validated tissue findings, demonstrating persistent cellular stress and immune activation in deceased donor recipients compared with physiologic wound healing in LD recipients. Consistent with these molecular predictions, serum creatinine levels were highest in donation after cardiac death (DCD), intermediate in donation after brain death (DBD), and lowest in LD recipients. The intersection of tissue proteoform signatures across all marginal tissues identified four proteoforms consistently elevated in deceased and discarded kidneys: ACTG1, acetylated CRYAB, PARK7, and S100A4. Collectively, these proteoforms capture key molecular indicators of graft quality, reflecting oxidative stress, cellular injury, and immune activation pathways. As such, they represent promising point-of-care (POC) biomarker candidates for objective kidney classification, potentially improving donor kidney utilization.
Translational statement: Current methods for evaluating donor kidney quality rely on subjective assessments, contributing to the discard of approximately 30% of potentially viable organs. This study demonstrates that "top-down" proteomics can objectively identify molecular signatures distinguishing high-quality from marginal donor kidneys. Top-down proteomics analyzes intact proteins with their post-translational modifications or cleavage products, termed proteoforms, to provide mechanistic insights into graft quality. We identified four proteoforms (ACTG1, acetylated CRYAB, PARK7, and S100A4) consistently elevated in deceased and discarded kidneys, reflecting oxidative stress, cellular injury, and immune activation. These molecular markers correlated with post-transplant kidney outcome, as measured by serum creatinine levels and recipient blood proteoforms. As a next step, validation in larger cohorts could establish these proteoforms as point-of-care biomarkers for real-time donor kidney assessment during procurement. This objective molecular stratification could reduce unnecessary organ discards and improve transplant outcomes by matching organ quality with recipient risk profiles.
Jambon, F.; Di Primo, C.; Dromer, C.; Demant, X.; Roux, A.; Le Pavec, J.; Brugiere, O.; Bunel, V.; Guillemain, R.; Goret, J.; Duclaut, M.; Cargou, M.; Ralazamahaleo, M.; Wojciechowski, E.; Guidicelli, G.; Hulot, V.; Devriese, M.; Taupin, J.-L.; Visentin, J.
Background: In lung transplantation, de novo immunodominant donor-specific anti-HLA antibodies recognizing HLA-DQ antigens (dn-iDSA-DQ) are predominant and can induce chronic lung allograft dysfunction (CLAD). We previously developed a method to measure the active concentration of dn-iDSA-DQ. We aimed to determine whether this new quantitative biomarker is associated with transplantation outcomes. Methods: This retrospective multicentre cohort study included 90 lung transplant recipients (LTRs) developing dn-iDSA-DQ, evidenced through single antigen flow beads (SAFB) follow-up. We measured the active concentration of dn-iDSA-DQ at the time of their first detection (T0) for all LTRs, and within the 2 years after DSA detection, whenever possible. SAFB dn-iDSA-DQ characteristics and clinical data were retrieved up to 5 years after DSA detection. Results: We tested 184 sera with SPR (n=90 at T0, n=94 within the 2 years after DSA detection), among which 63 (34.4%) had a quantifiable concentration of the dn-iDSA-DQ (≥0.3 nM). The median SAFB mean fluorescence intensity (MFI) of the dn-iDSA-DQ with a concentration ≥0.3 nM was higher (p<0.0001), yet the correlation between SAFB MFI and active concentration was low (r=0.758, p<0.0001). In multivariate analysis, a concentration of the dn-iDSA-DQ ≥0.3 nM at T0 was independently associated with a lower 2-year CLAD-free survival (HR 2.06, p=0.02). A concentration of the dn-iDSA-DQ ≥0.3 nM within the 2 years from DSA detection was associated with lower graft survival in univariate analysis. Conclusions: The active concentration of dn-iDSA-DQ appears to be a valuable biomarker to identify pathogenic DSA at their first detection because of its association with CLAD.
Hullin, R.; Pitta Gros, B.; Rocca, A.; Laptseva, N.; Martinelli, M. V.; Flammer, A. J.; Lu, H.; Meyer, P.; Leuenberger, N.; Mueller, M.
Background: Iron metabolism disorder is highly prevalent before and after heart transplantation (HTx). The impact of pretransplant and posttransplant iron disorder on posttransplant outcomes is unclear. Objective: Pretransplant serum levels of key regulator proteins of iron metabolism (hepcidin, interleukin-6, erythroferrone) were tested for prediction of the composite outcome of 1-year posttransplant all-cause mortality (ACM) or ≥moderate acute cellular rejection (ACR). Furthermore, serum levels of these proteins were measured at 1-year posttransplant to explore their posttransplant course and association with ACR. Results: In a multicenter cohort including 276 consecutive HTx recipients, patients with or without the outcome (n=118/158, respectively) did not differ for pretransplant demographics, mismatch of donor/recipient sex, mismatch of HLA epitopes, and hepcidin or interleukin-6 levels. However, pretransplant erythroferrone levels were higher (1.40 vs. 1.19 ng/mL; p=0.013) and hemoglobin levels were lower (124.5 vs. 127 g/L; p=0.004) among patients with the composite outcome. Pretransplant erythroferrone levels >2.25 ng/ml (4th quartile) were significantly associated with the composite outcome in multivariable analysis (OR 2.17; 95% CI 1.19-3.94, p=0.011; reference: 1st-3rd quartiles). In adjusted predicted proportions analysis, the incidence of the composite outcome was higher in 4th-quartile patients when compared to 1st-3rd-quartile patients (58.0 vs. 37.7%; p=0.003). At 1-year posttransplant, 80.4% of patients with pretransplant erythroferrone levels >2.25 ng/ml remained high; 88.4% of patients with pretransplant erythroferrone levels ≤2.25 ng/ml had high levels posttransplant. In 1-year survivors with high erythroferrone levels and ≥moderate ACR during the first postoperative year, the ratio of the opponent regulators of hepcidin gene expression, erythroferrone to interleukin-6, was higher when compared to those without ACR (1.18 vs. 0.41; p=0.016). Hepcidin levels were not different between these two subgroups, indicating a disproportionate erythroferrone increase. Conclusion: High pretransplant erythroferrone levels predict the composite posttransplant outcome of 1-year ACM or ≥moderate ACR. Disproportionately high posttransplant erythroferrone levels are associated with ≥moderate acute cellular rejection.
Neely, M.; Wojdyla, D. M.; Hong, H.; Wang, P.; Anderson, M. R.; Arroyo, K.; Belperio, J.; Benvenuto, L.; Budev, M.; Combs, M.; Dhillon, G.; Hsu, J. Y.; Kalman, L.; Martinu, T.; McDyer, J.; Oyster, M.; Pandya, K.; Reynolds, J. M.; Rim, J. G.; Roe, D. W.; Shah, P. D.; Singer, J. P.; Singer, L.; Snyder, L. P.; Tsuang, W.; Weigt, S. S.; Christie, J. D.; Palmer, S. M.; Todd, J.
Background: We aimed to identify data-driven FEV1 trajectory phenotypes post-chronic lung allograft dysfunction (CLAD), relate these phenotypes to patient factors and future graft loss, and develop a classification approach for prospective patients. Methods: We studied adult first lung recipients with probable CLAD from two prospective multicenter cohorts: CTOT-20 (n=206) and LTOG (n=1418). FEV1 trajectories over the first nine months post-CLAD were characterized using joint latent class mixed models, jointly modelling time-to-graft loss to account for informative censoring. Models were fit independently in both cohorts, and also only among LTOG bilateral recipients. A classification and regression tree (CART) model was derived in LTOG bilateral recipients and applied to CTOT-20 bilateral recipients. Findings: Four distinct early FEV1 trajectory classes were identified in CTOT-20, with large differences in nine-month graft loss (72.3%, 31.1%, 2.2%, 0%). In LTOG, similar trajectory patterns were reproduced, with an additional class demonstrating early post-CLAD FEV1 improvement. Among bilateral recipients, trajectory classes showed a clear risk gradient, including a high-risk class with 100% graft loss and a low-risk class with no early graft loss. A CART model incorporating clinical and spirometric variables demonstrated good discrimination in LTOG bilateral recipients (multiclass AUC 0.85) and consistent class assignment and trajectory patterns when applied to CTOT-20. Interpretation: We identified reproducible, clinically meaningful early post-CLAD FEV1 trajectory phenotypes with differential graft loss risk. These phenotypes and a pragmatic classification tool may support risk stratification, trial enrichment, and improved prognostication for patients and clinicians.
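The abstract does not publish the fitted tree, so the following is a purely illustrative stand-in: a toy rule of the kind a CART model on early post-CLAD spirometry might encode. The variables and thresholds are invented for illustration only:

```python
def classify_trajectory(fev1_pct_of_baseline_3mo: float,
                        fev1_slope: float) -> str:
    """Toy stand-in for the study's CART classifier. The real splits and
    variables are not given in the abstract; these thresholds are invented
    purely to show how such a tree assigns trajectory risk classes."""
    if fev1_pct_of_baseline_3mo < 50:
        return "high-risk"          # steep early loss of lung function
    if fev1_slope < 0:
        return "intermediate-risk"  # continuing decline from a higher base
    return "low-risk"               # stable or improving FEV1

print(classify_trajectory(40, -1.2))
```

A real CART model would learn such splits from data (e.g. with a decision-tree learner) rather than hard-coding them.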
Maruyama, Y.; Okada, D.; Tsuda, H.; Kish, D. D.; Keslar, K. S.; Dvorina, N.; Baldwin, W. M.; Fairchild, R. L.
Acute antibody-mediated rejection (aABMR) is an important cause of clinical kidney graft injury and failure. Transcripts associated with NK cell activation in graft biopsies are diagnostic of aABMR, but the mechanisms underlying NK cell activation during ABMR remain poorly understood. In contrast to the long-term (>60 days) survival of complete MHC-mismatched kidney allografts in wild-type C57BL/6 mice, B6.CCR5-/- recipients develop high titers of donor-specific antibody (DSA) with allograft rejection between days 18 and 28 post-transplant. This has allowed investigation of mechanisms underlying NK cell activation within kidney allografts during aABMR. DSA titers first became detectable in B6.CCR5-/- (H-2b) recipients of A/J (H-2a) kidney allografts at day 8 and peaked on day 15 post-transplant, accompanied by a parallel increase in mRNA levels of Rae-1e, a ligand for the NK cell activating receptor NKG2D. A/J kidneys in B6.CCR5-/-NKG2D-/- recipients and A/J.Rae-1e-/- kidneys in B6.CCR5-/- recipients survived >60 days, despite high serum DSA levels. Flow cytometric analysis of allograft-infiltrating cells in B6.CCR5-/- recipients on day 15 post-transplant revealed inflammatory monocyte and NK cell infiltration, and NK cell activation with proliferation and expression of CD107a, a marker of cytotoxic function. These features of aABMR were absent or markedly reduced by recipient NKG2D deficiency or donor graft Rae-1e deficiency. These findings suggest that interference with expression of allograft Rae-1e or recipient NK cell NKG2D abrogates aABMR despite persistently high DSA levels, and that aABMR requires coordination between infiltrating NK cell and inflammatory monocyte activation within the kidney allograft.
Sharifi, I.; Tewksbury, E.; Wadsworth, M.; Goldberg, D. S.
Importance: Donor hospitals in the United States are assigned to a designated organ procurement organization (OPO) responsible for managing deceased donors in the designated donation service area (DSA). Donor hospitals can apply for waivers to work with a different OPO with appropriate justification, and beginning with the 2026 OPO certification cycle, the highest-performing OPOs can bid to work with donor hospitals managed by intermediate- and low-performing OPOs. Objective: We sought to evaluate the impact of donor hospital waivers on organ donation activity. Design: Retrospective cohort study. Setting: We evaluated Organ Procurement and Transplantation Network (OPTN) data from two OPOs (Donor Network West and Honor Bridge), each with a donor hospital (Renown Regional Medical Center and North Carolina Baptist Hospital) in its DSA granted a waiver to work with a different OPO beginning in April 2025. Main Outcome: We assessed changes in the number of organ donors and organs transplanted pre- and post-granting of a waiver using a difference-in-differences approach based on multilevel mixed-effects models. Results: After switching OPO affiliations, these two donor hospitals had marked and statistically significant increases in the number of donors recovered and organs transplanted, despite stable numbers of reported deaths at each hospital. In multivariable models, switching OPO affiliations was associated with a statistically significant increase in donors recovered and organs transplanted. Conclusion: With eight months of post-waiver data, donor hospitals with granted waivers had significant increases in donation activity driven by improved donor conversion rather than changes in referral patterns or organ yield per donor. Although longer-term data are needed to confirm these findings, CMS and the organ transplant community should feel confident that changing donor hospital-OPO affiliations will not negatively impact donation and may lead to significant increases in donation.
These data also counter unfounded concerns that the continued granting of waivers and realignments of donor hospital-OPO affiliations during the 2026 recertification cycle will lead to a collapse of the system of organ donation. Key Points: Question: Do donor hospitals that request a waiver to change OPO affiliations have changes in organ donation rates? Findings: Using a difference-in-differences approach, the two donor hospitals that changed OPO affiliations had a significant increase in organ donors and organs transplanted after being granted a waiver. Meaning: Donor hospitals that change OPO affiliations have an immediate increase in organ donation activity.
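The difference-in-differences estimator underlying the analysis above is, in its simplest 2x2 form, a single subtraction of subtractions; a minimal sketch with hypothetical donor counts (the study itself uses multilevel mixed-effects models on top of this logic):

```python
def did_estimate(pre_treated: float, post_treated: float,
                 pre_control: float, post_control: float) -> float:
    """Canonical 2x2 difference-in-differences: the pre-to-post change at the
    waiver hospitals minus the contemporaneous change at comparison hospitals,
    which nets out secular trends common to both groups."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Hypothetical yearly donor counts: waiver hospital 20 -> 35, comparison 22 -> 24
print(did_estimate(20, 35, 22, 24))
```

Under the parallel-trends assumption, the result attributes to the waiver only the change beyond what comparison hospitals experienced over the same period.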
Monserrate-Marrero, J.; Castro-Medina, M.; Feingold, B.; Giraldo-Grueso, M.; Rose-Felker, K.; Tang, R.; Kobayashi, K.; Diaz-Castrillon, C. E.; McIntyre, K.; Da Silva, L.; Da Silva, J. P.; Morell, V.; Seese, L.
Background: Primary graft dysfunction (PGD) remains one of the leading causes of early mortality after pediatric heart transplant (HT). While the neurodevelopmental impacts of congenital heart disease (CHD) are well characterized, the effect of PGD on long-term neurodevelopmental outcomes in pediatric HT recipients remains unknown. We sought to determine the association between PGD and neurodevelopmental outcomes in this population. Methods: We performed a retrospective cohort study using the United Network for Organ Sharing (UNOS) database. All pediatric (age <18 years) isolated heart transplant recipients from 2010 to 2025 were included. The most recent pre- and post-transplant neurodevelopmental outcomes, including cognitive delay, motor development, academic progress, and functional status (stratified by age), were compared between PGD (n=434) and non-PGD groups (n=6956). Results: PGD patients had significantly worse pre-transplant functional status and motor development. Post-transplant, PGD was associated with worse motor development (18.8% vs. 13.0% definite motor delay; p=0.01) and functional status in younger children (39.5% vs. 57.8% able to keep up with peers; p<0.001). Post-transplant stroke occurred 3.5 times more frequently in PGD patients (11.5% vs. 3.3%; p<0.001). Cognitive development (p=0.94) and academic progress (p=0.096) did not differ significantly. Thirty-day (7.8% vs. 1.9%) and 1-year mortality (20.3% vs. 6.4%) were significantly higher in PGD patients (both p<0.001). Conclusions: This is the first study to characterize neurodevelopmental outcomes in pediatric patients undergoing HT with PGD. PGD is associated with significantly worse motor development and functional status independent of pre-transplant baseline. The 3.5-fold higher stroke rate provides a plausible neurological mechanism. The findings support targeted developmental surveillance recommendations and early intervention for this high-risk population.
Ueland, K. M.; Elahi, T.; Rasmussen, M.; Wolfe, A.; Purcell, H.; Chakka, S. R.; Mirimo-Martinez, M.; Persinger, H.; Johnson, K.; Boynton, A.; McMillen, K.; Byelykh, M.; Biernacki, M.; Yeh, A.; Ali, N.; Manjappa, S.; Wuliji, N.; Fredricks, D.; Bleakley, M.; Holmberg, L.; Schenk, J.; Raftery, D.; Ma, J. A.; Hill, G.; Neuhouser, M. L.; Lee, S.; Markey, K. A.
Plant-based dietary strategies may offer a tractable approach to mitigating microbiome disruption and improving outcomes in patients undergoing autologous hematopoietic cell transplantation (auto-HCT) for multiple myeloma, a population in whom intestinal dysbiosis has been linked to infectious complications and inferior survival. We conducted a single-arm study to test the feasibility and biological activity of a high-fiber, plant-based, whole-food meal delivery intervention during the peri-transplant period. Adults with multiple myeloma (n = 22) received fully prepared, plant-based meals for 5 weeks spanning conditioning, neutropenia, and early recovery, with the goal of supporting consumption of nutrient-dense, high-fiber foods despite transplant-related symptoms that often limit oral intake. The primary endpoints were feasibility and tolerability, defined by successful enrollment, adherence to study procedures, and patient-reported intake of study meals; diet was quantified using prospective food diaries and 24-hour dietary recall surveys. Secondary endpoints included changes in gut microbiome composition and function assessed by shotgun metagenomic sequencing and stool short-chain fatty acid (SCFA) measurements. The intervention was feasible and generally well tolerated, with all participants consuming at least some proportion of delivered meals and with adherence sufficient to support planned dietary and correlative analyses. Greater intake of study meals was associated with more pronounced shifts in gut microbial communities, including enrichment of SCFA-producing taxa and compositional changes consistent with a fiber-responsive microbiome. Stool SCFA concentrations increased from baseline to the end of the intervention, suggesting a functional impact of the dietary strategy on microbial metabolite production during the peri-transplant period. 
These findings demonstrate that a plant-based meal delivery intervention is implementable during auto-HCT and suggest dose-dependent modulation of the gut microbiome and its metabolic output. Larger randomized trials are warranted to determine whether microbiome-targeted nutrition can reduce transplant-related toxicities, enhance immune recovery, and improve disease control in multiple myeloma. The trial is registered at ClinicalTrials.gov (NCT06559709).
Singh, S.; Patel, S. K.; Matsuura, R.; Velazquez, D.; Sun, Z.; Noel, S.; Rabb, H.; Fan, J.
Background: Kidney transplantation is the preferred treatment strategy for end-stage kidney disease. Deceased donor kidneys usually undergo cold storage until kidney transplantation, leading to cold ischemia injury that may contribute to poor graft outcomes. However, the molecular characterization of potential mechanisms of cold ischemia injury remains incomplete. Results: To bridge this knowledge gap, we leveraged the 10x Visium spatial transcriptomic technology to perform full transcriptome profiling of murine kidneys subjected to varying durations of cold ischemia typical of a deceased donor kidney transplant setting. We developed a computational workflow to identify and compare spatiotemporal transcriptomic changes that accompany the injury pathophysiology in a tissue compartment-specific manner. We identified proportional enrichment of oxidative phosphorylation (OXPHOS) genes with increasing duration of cold ischemia injury within the oxygen-lean inner medulla region, suggestive of an atypical metabolic presentation. This was distinct in cold ischemia injury tissue compared to warm ischemia-reperfusion kidney injury tissue. Spatiotemporal trends were validated by qPCR and immunofluorescence in a larger cohort of mice. We provide an interactive online browser at https://jef.works/CellCarto-ColdIschemia/ to facilitate exploration of our results by the broader scientific and clinical community. Conclusions: Altogether, our spatiotemporal transcriptomic analysis identified coordinated molecular changes within metabolic pathways such as OXPHOS deep within the cold ischemic kidney, highlighting the need for increased attention to the inner medulla and potential opportunities for new insights beyond those available from superficial biopsy-focused tissue examinations.
Marchand, S.; Trochel, A.; Loirat, M.; Mignon, J.; Letellier, T.; Braud, M.; Delbos, L.; Fourgeux, C.; Taouli, S.; Peltier, C.; Gautreau-Rolland, L.; Poschmann, J.; Blancho, G.; Saulquin, X.; Bressollette-Bodin, C.; McIlroy, D.
BK polyomavirus (BKPyV) is a major complication in kidney transplant recipients (KTR), for whom no specific antiviral therapy is available. Modulation of immunosuppressive therapy results in virus clearance in most KTR with BKPyV DNAemia (controllers), but a significant minority fail to clear the virus (non-controllers). Here, we adapt LIBRA-seq, which links antibody sequence data to antigen specificity, to intact viral capsids of the four BKPyV genotypes to study and compare BKPyV-specific B-cell repertoires in controllers (n=8) versus non-controllers (n=3). Sequences were obtained for 5197 BKPyV-specific antibodies, and predicted antigen specificities were validated by ELISA and neutralizing assays (n=21 antibodies). We show that cross-genotype reactivity results from the recruitment of numerous broadly cross-reactive B-cell clones with preferential binding to the infecting genotype, making up 4.3% to 44.6% of the BKPyV-specific repertoire, while true broadly neutralizing antibodies are rare. The proportions of broadly specific and isotype-switched antibodies, rates of somatic hypermutation, and repertoire diversity were comparable in both patient groups, indicating that there is no identifiable deficit in the humoral response mounted by BKPyV non-controllers, and supporting the notion that humoral immunity alone is insufficient to control established BKPyV replication. This work shows that LIBRA-seq can be successfully applied to a non-enveloped virus and provides a framework for analyzing antiviral B-cell repertoires and antibody specificity in clinically relevant settings.
Schwarz, A.; Eismann, T.; Zheng, T.; Holzinger, S.; Denk, A.; Goeldel, S.; Urban, M.; Goettert, S.; Pourjam, M.; Lagkouvardos, I.; Neuhaus, K.; Herhaus, P.; Verbeek, M.; Gerner, R. R.; Fante, M.; Hiergeist, A.; Gessner, A.; Edinger, M.; Herr, W.; Kleigrewe, K.; Heidegger, S.; Janssen, K.-P.; Holler, E.; Meedt, E.; Schirmer, M.; Bassermann, F.; Wolff, D.; Poeck, H.; Weber, D.; Thiele Orberg, E.
The intestinal microbiome influences immune recovery and long-term outcomes after allogeneic hematopoietic stem cell transplantation (allo-SCT). While reduced bacterial diversity and depletion of immunomodulatory microbial metabolites during peri-engraftment have been linked to acute graft-versus-host disease (aGvHD) and mortality, it remains unclear whether microbiome recovery after engraftment and immune reconstitution is better reflected by bacterial diversity or by microbial metabolic output. We aimed to define microbiome recovery in the late post-transplant period and test whether a metabolite-based biomarker improves the prediction of clinical outcomes, including overall survival (OS) and chronic (c) GvHD. In this two-center longitudinal observational study, serial stool samples were collected from pre-transplant baseline to day +100 after allo-SCT in a discovery cohort (n = 20, Technical University Munich University Hospital (TUM)) and an independent validation cohort (n = 100, University Hospital Regensburg (UKR)). Gut microbiome composition was assessed by 16S rRNA gene amplicon sequencing, with metagenomic profiling in selected patients, and stool metabolites were quantified using targeted mass spectrometry. Patients were classified as RECOVERY or NO RECOVERY based on changes in bacterial richness between baseline and the post-transplant period. To capture microbial metabolic output, the previously established Immune-Modulatory Metabolite Risk Index (IMM-RI), comprising butyric, propionic, and isovaleric acids, desaminotyrosine and indole-3-carboxaldehyde, was adapted to the late post-transplant period (IMM-RI post-TX). Bacterial alpha diversity frequently improved by day +100; however, this did not consistently indicate restoration of baseline community structure and was not paralleled by recovery of stool metabolite profiles. Accordingly, RECOVERY status showed a limited association with survival or transplant-related mortality (TRM). 
In contrast, IMM-RI post-TX low-risk identified patients with preserved butyrate-associated biosynthetic capacity and was significantly associated with improved OS in both cohorts (UKR: HR 0.2052, 95% CI 0.07703-0.5466, p < 0.0001). In the validation cohort, IMM-RI post-TX low-risk status was also significantly associated with reduced relapse-related mortality. Interestingly, stool butyric, propionic, and valeric acid concentrations were increased in cGvHD of the skin, indicating context-dependent metabolite effects. These findings suggest that metabolite profiling outperforms bacterial diversity for predicting outcomes after allo-SCT and support microbial metabolites as promising biomarkers for risk stratification and as actionable candidates for precision microbiome interventions.
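The IMM-RI combines five stool metabolite concentrations into a single risk index, but the abstract does not give the scoring rule. The sketch below is therefore a hypothetical illustration (z-score each metabolite, sum, dichotomize at the cohort median), not the published index; the metabolite names come from the abstract, everything else is an assumption.

```python
from statistics import mean, pstdev

# Metabolites named in the abstract; the scoring scheme below is illustrative.
IMM_RI_METABOLITES = ["butyrate", "propionate", "isovalerate",
                      "desaminotyrosine", "indole-3-carboxaldehyde"]

def imm_ri_scores(cohort):
    """cohort: list of dicts mapping metabolite name -> stool concentration.
    Returns one composite score per patient: the sum of per-metabolite
    z-scores (higher = more immunomodulatory metabolites detected)."""
    scores = [0.0] * len(cohort)
    for met in IMM_RI_METABOLITES:
        values = [patient[met] for patient in cohort]
        mu, sd = mean(values), pstdev(values)
        for i, v in enumerate(values):
            scores[i] += (v - mu) / sd if sd > 0 else 0.0
    return scores

def risk_groups(scores):
    """Dichotomize at the median score: at/above -> 'low-risk'
    (metabolite output preserved), below -> 'high-risk'."""
    cutoff = sorted(scores)[len(scores) // 2]  # upper median
    return ["low-risk" if s >= cutoff else "high-risk" for s in scores]
```

Any monotone composite of the five concentrations would behave similarly for ranking; the clinically validated cut-offs would have to come from the original IMM-RI publication.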
Wehrens, S. M.; Arvas, M.; Fustolo-Gunnink, S. F.; Vinkovic Vlah, M.; Waters, A.; Erikstrup, C.; Drechsler, L. O.; Stanworth, S. J.; van den Hurk, K.
Background and Objectives: The "Pan-European Transfusion Research InfrAstructure" (PETRA) project was established to advance the use of donor, blood product, and patient datasets in Europe, aiming to benefit both patient and donor health. Here, the initial PETRA objective was to describe the landscape of existing donor and blood establishment (BE) databases. Materials and Methods: An online survey was circulated to the European Blood Alliance's BE members. The survey collected information on the feasibility of accessing donor data, and on challenges and possibilities for linking these datasets with information on the associated blood products, transfusion recipients, and donors' own health records. Results: Seventeen BEs across 16 countries completed the survey. The majority could, in principle, link their donor data to product data (13 BEs, 76%) and recipient data (10 BEs, 59%) for research purposes. However, capabilities were limited, and in only 29% of the BEs was donor-to-recipient linkage an automated process. BEs reported significant challenges to achieving full vein-to-vein linkage, including legal constraints and lack of consent (11 BEs) and resources (10-14 BEs). IT and data issues, as well as lack of knowledge and training, were cited as obstacles by a minority of BEs. Conclusion: Whilst the survey results suggest considerable interest in developing linkages between blood donors, their products, and recipients, many challenges remain. First steps in working towards PETRA may be assistance in navigating legal frameworks, as well as investment in resources and in the quality and harmonisation of data collections.
Highlights:
- 17 blood establishments (BEs) in 16 countries responded to a survey on obstacles and opportunities for achieving vein-to-vein datasets.
- In 59% of the BEs, donor-to-recipient links can be established for research improving transfusion outcomes, but in only 29% is this an automated process.
- To work towards a "Pan-European Transfusion Research InfrAstructure" (PETRA), legal frameworks, adequate donor consent, and (financial and human) resources are the most common obstacles that require addressing.
Xia, C.; Lian, M.; Ma, B.; Yu, H.; Zhang, R.; Wen, L.; Wang, X.; Zhao, Y.; Ouyang, Z.; Ye, Y.; Feng, X.; Wu, H.; Lai, L.
Xenotransplantation offers a potential solution to the organ shortage crisis. Multi-gene modification of pigs, such as knockout of three carbohydrate antigen-related genes and expression of immunoprotective proteins, can significantly improve xenograft survival. However, existing strategies face challenges: transposon-based transgenesis may lead to unstable expression, while exogenous promoters used in site-specific integration are susceptible to epigenetic silencing, hindering long-term stable expression. Therefore, developing a donor pig model capable of sustained multi-gene expression is critical. To address this, CRISPR-Cas9 was used to knock out three major glycan antigen genes to eliminate hyperacute rejection. Subsequently, four human protective genes were site-specifically integrated into the porcine Rosa26 safe-harbor locus, with expression driven by the endogenous Rosa26 promoter and the THBD core promoter for long-term stable and tissue-specific expression; the selection marker was removed using Cre/loxP. Results showed complete absence of the three glycan antigens in BM7G pigs, while the four protective proteins were stably expressed in vascular endothelial cells and major organs. Among them, hCD55 and hCD46 were widely expressed, while hTHBD and hEPCR showed vascular-specific expression. In vitro assays confirmed that BM7G porcine endothelial cells significantly reduced human antibody binding, effectively inhibited complement-dependent cytotoxicity, and decreased thrombin-antithrombin complex formation. In conclusion, by combining xenoantigen knockout with endogenous promoter-driven expression of multiple human protective genes, a seven-gene-modified pig model with low immunogenicity and synergistic protective function was successfully constructed, providing an important donor resource for preclinical xenotransplantation research.
Stuut, A. H. G.; Brazda, P.; Janssen, A.; Vyborova, A.; Karaiskaki, F.; Keramati, F.; de Bont, D. A.; Nicolasen, M. J. T.; Gatti, L.; Hutten, T. J. A.; Yildiz, J.; Spierings, E. T.; Straetemans, T. C. M.; Beringer, D.; Pagliuca, S.; Stunnenberg, H. G.; Sebestyen, Z.; Drylewicz, J.; de Witte, M. A.; Kuball, J. H. E.
Immune reconstitution after allogeneic hematopoietic stem cell transplantation is influenced by graft composition and viral reactivation, but the combined long-term impact on β and γδ T cells remains unclear. We analyzed a cohort of 213 patients receiving either βT cell-depleted grafts (n=146; graft engineering that removes donor βT cells) or T cell-replete grafts (n=67; containing donor T cells). Longitudinal immune phenotyping was integrated with bulk and single-cell TCR repertoire and transcriptomic profiling. CMV reactivation was associated with expansion of CD8+ βT cells across both transplant types and with numerical dominance of Vδ2- γδ T cells specifically in βT cell-depleted recipients. Vδ2- γδ T cells underwent early polyclonal expansion followed by repertoire focusing, independent of CMV, whereas βT cells remained clonally restricted. Reduced early Vδ2+ γδ TCR diversity was associated with EBV reactivation. Single-cell and TCR tracking analyses revealed long-term persistence of donor-derived Vδ2+ γδ TCRs, whereas Vδ1+ γδ and βT cell repertoires were predominantly rebuilt de novo. Despite de novo rebuilding, βTCR repertoire diversity diverged by platform at one year: βT cell-depleted recipients exhibited marked (hyper)expansion of βTCR clonotypes and lower diversity than T cell-replete recipients, indicating a durable imprint of graft engineering on βTCR clonality. Transcriptomic profiling showed that post-transplant T cells predominantly adopted effector programs, with platform-dependent polarization toward cytotoxic signatures in βT cell-depleted recipients and toward AREG-associated tissue-repair signatures in T cell-replete recipients, consistent with wound-healing functions.
In conclusion, transplantation platforms imprint durable clonal and transcriptional remodeling of β and γδ T cells, while viral reactivation primarily amplifies expansion without fundamentally reshaping repertoire architecture.
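Repertoire diversity and clonality statements like the ones above are typically entropy-based. A minimal sketch, assuming clonotype read counts as input; the abstract does not specify the study's exact diversity estimator, so this is one common choice (normalized Shannon entropy), not necessarily theirs:

```python
import math

def shannon_diversity(clone_counts):
    """Shannon entropy (in nats) of a clonotype count distribution."""
    total = sum(clone_counts)
    freqs = [c / total for c in clone_counts if c > 0]
    return -sum(f * math.log(f) for f in freqs)

def clonality(clone_counts):
    """1 - normalized Shannon entropy: 0 = perfectly even (diverse)
    repertoire, approaching 1 = dominated by a few (hyper)expanded clones."""
    n = len([c for c in clone_counts if c > 0])
    if n <= 1:
        return 1.0
    return 1.0 - shannon_diversity(clone_counts) / math.log(n)
```

Under this metric, the βT cell-depleted recipients' lower diversity at one year would show up as higher clonality of their βTCR clonotype counts.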
Pagliuca, S.; Mooyaart, J. E.; Ayuk, F.; Zeiser, R.; Potter, V.; Dreger, P.; Bethge, W.; Hilgendorf, I.; Michonneau, D.; Rambaldi, A.; Sengeloev, H.; Passweg, J.; Richardson, D.; Gedde-Dahl, T.; Kinsella, F.; Edinger, M.; Mielke, S.; Eder, M.; Andreani, M.; Crivello, P.; Merli, P.; Hoogenboom, J. D.; de Wreede, L. C.; Chabannon, C.; Kuball, J.; Gurnari, C.; Fleischhauer, K.; Ruggeri, A.; Lenz, T. L.
Allogeneic hematopoietic cell transplantation (allo-HCT) hinges on a delicate trade-off between graft-versus-tumor control and graft-versus-host disease (GvHD), mediated by donor T-cell recognition of antigens presented by recipient human leukocyte antigen (HLA) molecules. We hypothesized that, beyond allele-level matching, sequence divergence at peptide-binding grooves across donor and recipient HLA loci shapes these responses. To this end, we evaluated the effect of HLA evolutionary divergence (HED), a metric quantifying amino acid variability at HLA peptide-binding sites, on outcomes in 4,695 patients with selected hematological malignancies undergoing allo-HCT from a 9/10 mismatched unrelated donor (MMUD), reported to the EBMT database. We examined (i) locus-specific recipient HED (HED-R) and (ii) "HED-mismatch" (HED-MM), capturing immunopeptidome divergence at the mismatched locus. While dichotomous mismatch status explained differences in survival and acute GvHD risk (with overall greater detriment for class I loci), HED metrics uncovered substantial within-mismatch heterogeneity. In the DRB1-mismatched subgroup, HED-MM at this locus independently predicted inferior relapse-free survival (RFS), with an attenuating time-dependent association further modulated by cross-locus HED-R. In this subgroup, higher HED-R at HLA-A and HLA-C was associated with increased risks of acute GvHD and non-relapse mortality, respectively. Among HLA-B-mismatched pairs, higher DRB1 HED-R was associated with worse overall survival (OS) and RFS and a higher relapse risk. In the HLA-A-mismatched subgroup, higher HED-R at HLA-A increased chronic GvHD risk. Collectively, HED-derived metrics complement conventional mismatch classification by capturing qualitative differences in donor-recipient immunopeptidome interactions and reveal a complex, non-linear interplay among alleles across mismatch subgroups that modulates the clinical impact of mismatching.
Key points:
- In mismatched unrelated HCT, baseline risk varies across mismatch constellations, with class I mismatches more detrimental than class II.
- HED complements conventional HLA mismatch classification by capturing qualitative donor-recipient immunopeptidome interactions.
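HED, as described above, quantifies amino acid divergence between the peptide-binding-site sequences of two HLA alleles. A minimal sketch of the idea, using a 0/1 identity distance as a simplified stand-in for the Grantham physicochemical distance matrix that published HED implementations use; sequences here are toy examples, not real HLA peptide-binding sites:

```python
def hed(seq_a, seq_b, distance=None):
    """Simplified HLA evolutionary divergence: mean per-residue distance
    between the aligned peptide-binding-site sequences of two alleles.
    `distance` defaults to 0/1 identity; the published metric substitutes
    the Grantham physicochemical distance between amino acid pairs."""
    if len(seq_a) != len(seq_b):
        raise ValueError("peptide-binding-site sequences must be aligned")
    if distance is None:
        distance = lambda x, y: 0.0 if x == y else 1.0
    return sum(distance(a, b) for a, b in zip(seq_a, seq_b)) / len(seq_a)
```

HED-R would apply this to a recipient's two alleles at one locus; HED-MM would apply it to the donor and recipient alleles at the mismatched locus.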
Paverd, H.; Gao, Z.; Mahani, G.; Fabre, M.; Burge, S.; Hoare, M.; Crispin-Ortuzar, M.
Background & Aims: Liver cancer primarily develops in patients with chronic liver disease (CLD), yet most cases are diagnosed at an advanced stage with poor prognosis. While clinical surveillance of patients with CLD generates extensive longitudinal data, its unstructured free-text nature hinders large-scale research. To unlock this real-world evidence, we developed a scalable framework using open-source Large Language Models (LLMs) to transform unstructured clinical text into structured data. Methods: We conducted a multi-stage evaluation of LLM-based extraction from multi-source clinical documentation of liver transplant recipients. A calibration set comprising 507 reports (414 radiology, 65 pathology, and 28 liver transplant assessment reports) from 30 patients was manually annotated to benchmark four open-source LLMs (Llama 3.1 8B, Llama 3.3 70B, OpenBioLLM 70B, DeepSeek R1 8B) against a regular expression baseline across 73 tasks. To ensure structured outputs, we compared constrained decoding (Guidance and Ollama packages) against unconstrained prompting across 5,590 prompt-output pairs. The finalised pipeline was then applied to the full cohort of 835 patients transplanted in our centre over the past decade. Results: Among the models tested, Llama 3.3 70B performed best, exceeding 90% accuracy on 59/73 tasks and outperforming both a medically fine-tuned model (OpenBioLLM 70B) and a smaller variant (Llama 3.1 8B). Constrained decoding achieved >99.9% format adherence, far surpassing unconstrained prompting (87.4%). Applied to the full cohort, the pipeline successfully analysed 22,493 reports to generate 37,125 datapoints (45 variables, 835 patients) without manual annotation. Further analysis confirmed known liver cancer risk factors (male sex, viral hepatitis, smoking, diabetes) and allowed reconstruction of longitudinal disease timelines.
Conclusions: This work provides a scalable blueprint for transforming real-world clinical free-text into structured formats, paving the way for accelerated, data-driven research into complex pre-cancerous diseases like CLD.
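The pipeline above benchmarks LLM extraction against a regular-expression baseline. A minimal sketch of what one such baseline task might look like; the pattern, the field (lesion size), and the report wording are illustrative assumptions, not taken from the study's 73 tasks:

```python
import re

# Hypothetical baseline for one extraction task: pull lesion sizes from
# free-text radiology reports and normalize to millimetres.
SIZE_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(cm|mm)\b", re.IGNORECASE)

def extract_largest_lesion_mm(report_text):
    """Return the largest size mentioned in the report, in millimetres,
    or None if no size is found."""
    sizes = []
    for value, unit in SIZE_RE.findall(report_text):
        mm = float(value) * (10.0 if unit.lower() == "cm" else 1.0)
        sizes.append(mm)
    return max(sizes) if sizes else None
```

Regex baselines like this are brittle to wording variation ("subcentimetre", "measuring two cm"), which is exactly the gap the LLM-based extraction is meant to close.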
Fink, A.; Burzer, F.; Sacalean, V.; Rau, S.; Kaestingschaefer, K. F.; Rau, A.; Koettgen, A.; Bamberg, F.; Jaenigen, B.; Russe, M. F.
Background: Kidney volumetry derived from CT has been proposed as a surrogate of renal function in living kidney donor evaluation. However, clinical integration has been limited by reader-dependent workflows and semiautomatic methods susceptible to image quality. Purpose: To evaluate whether fully automated CT-based segmentation of renal cortex, medulla, and total parenchymal volume provides reproducible volumetric biomarkers associated with global and split renal function in living kidney donor candidates. Materials and Methods: In this retrospective single-center study, 461 living kidney donor candidates (2003-2021) underwent contrast-enhanced abdominal CT. A convolutional neural network was trained to automatically segment cortical, medullary, and total parenchymal volumes on arterial-phase images. Segmentation performance was evaluated against manual reference annotations. Volumes were indexed to body surface area. Associations with eGFR, 24-hour creatinine clearance, cystatin C, and tubular clearance were assessed using the Spearman correlation coefficient (ρ), and side-specific volume fractions were compared with scintigraphy-derived split function. Results: Automated segmentation achieved excellent agreement with expert reference segmentations (Dice 0.95 for cortex; 0.90 for medulla). eGFR correlated moderately with cortical (ρ = 0.46) and total parenchymal volume (ρ = 0.45), and modestly with medullary volume (ρ = 0.30). Similar associations were observed for other global measures, with the strongest correlation between cortical volume and tubular clearance (ρ = 0.53). Side-specific volume fractions correlated with scintigraphy-derived split renal function (ρ = 0.49-0.56; all p < 0.001). Conclusion: Automated CT-based renal subcompartment segmentation provides reproducible volumetric biomarkers within routine donor evaluation.
Cortical volume performs comparably to total parenchymal volume and tracks split renal function at the cohort level, suggesting potential utility in donor assessment.
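The Dice coefficients used above to validate the segmentations are simple overlap ratios. A minimal sketch, assuming masks are represented as sets of voxel coordinates (real pipelines compute the same quantity on boolean arrays):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    sets of voxel coordinates: 2*|A ∩ B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A Dice of 0.95 for cortex thus means 95% agreement (by this overlap measure) between the network's cortical mask and the expert annotation.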