Transplantation
○ Ovid Technologies (Wolters Kluwer Health)
Preprints posted in the last 90 days, ranked by how well they match Transplantation's content profile, based on 11 papers previously published here. The average preprint has a 0.05% match score for this journal, so anything above that is already an above-average fit.
Dedinska, I.; Dadhania, D. M.; Li, C.; Hauser, N.; Lamba, P.; Lee, J. R.; Muthukumar, T.; Suthanthiran, M.
The long-term impact of SARS-CoV-2 infection on kidney allograft survival remains incompletely understood, particularly regarding the influence of vaccination, acute kidney injury (AKI), and post-infection immunosuppression. We conducted a retrospective analysis of 129 kidney transplant recipients with confirmed SARS-CoV-2 infection between March 2020 and March 2022, with a median follow-up of 50 months. Among the 129 recipients, 106 (82%) received vaccination at some time before or after SARS-CoV-2 infection, while 23 (18%) remained unvaccinated. Unvaccinated patients experienced significantly lower long-term graft survival (52% vs. 85%; p = 0.0004) and patient survival (83% vs. 99%; p = 0.0003) compared with vaccinated recipients. AKI occurred in 15% of recipients and independently predicted graft failure (aHR 2.88; p = 0.0341). Post-SARS-CoV-2 serum creatinine and albuminuria were strong prognostic markers of graft loss. Unvaccinated status independently predicted graft failure in both transplantation-anchored (aHR 2.80; p = 0.0342) and SARS-CoV-2-anchored models (aHR 5.31; p = 0.0004). Continuation of mycophenolate mofetil at the post-infection assessment was associated with reduced graft-failure risk (aHR 0.99; p = 0.0193). These findings underscore the importance of sustained vaccination in preserving long-term allograft function.
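For orientation, this kind of two-group survival comparison is straightforward to reproduce; here is a minimal sketch using the lifelines library on fabricated data (event rates and follow-up times are invented stand-ins, not the study's data):

```python
# Minimal sketch of the reported survival comparison, using the lifelines
# library on fabricated data (rates and follow-up invented for illustration).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 129
df = pd.DataFrame({
    "vaccinated": rng.random(n) < 0.82,    # ~106/129 vaccinated, as reported
    "months": rng.uniform(1, 50, n),       # follow-up (study median ~50 mo)
})
# Assume a higher graft-failure rate among unvaccinated recipients
df["graft_failed"] = rng.random(n) < np.where(df["vaccinated"], 0.15, 0.48)

vax, unvax = df[df["vaccinated"]], df[~df["vaccinated"]]
kmf = KaplanMeierFitter().fit(vax["months"], vax["graft_failed"], label="vaccinated")
print(kmf.survival_function_.tail(1))

res = logrank_test(vax["months"], unvax["months"],
                   event_observed_A=vax["graft_failed"],
                   event_observed_B=unvax["graft_failed"])
print(f"log-rank p = {res.p_value:.4f}")
```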
Vranken, A.; Coemans, M.; Bemelman, F. J.; Chauveau, B.; Debyser, T.; Florquin, S.; Koshy, P.; Kuypers, D.; Masset, C.; Pagliazzi, A.; Vanhoutte, T.; Wellekens, K.; Vaulet, T.; Kers, J.; de Vries, A. P. J.; Meziyerh, S.; Verbeke, G.; Naesens, M.
Background: The effects of Banff histological diagnoses on kidney transplant outcome have been well characterized. However, repeated observation of such histological injury across multiple biopsies in kidney transplant recipients remains insufficiently explored. Methods: In an observational cohort (N=1819 transplantations with 5736 post-transplant biopsies), recurrent-event survival models quantified transitions between diagnoses of T-cell mediated rejection (TCMR), antibody-mediated rejection (AMR), DSA-negative C4d-negative microvascular inflammation (MVI DSA-/C4d-), BK polyomavirus nephropathy (BKPyVAN), borderline TCMR (bTCMR), and probable AMR (pAMR), revealing patterns in the disease trajectories. In two observational cohorts (N=1818 transplantations with 5732 biopsies; N=853 transplantations with 975 biopsies), time-dependent cumulative covariates were constructed for TCMR, AMR, MVI DSA-/C4d-, and BKPyVAN, enabling estimation of associations of repeated diagnoses with graft failure using multivariable cause-specific Cox models. Results: The incidence rate of a diagnosis was most strongly associated with earlier diagnosis of the same type, but associations between different types of diagnoses also occurred. The hazard of kidney graft failure was significantly increased by repeated observation of TCMR in multiple biopsies (HR 7.97, 95% CI 4.94-12.86), as well as by repeated AMR (HR 6.19, 95% CI 3.15-12.17), repeated MVI DSA-/C4d- (HR 4.53, 95% CI 2.15-9.54), and repeated BKPyVAN (HR 10.90, 95% CI 5.83-20.35). The hazard of graft failure was increased more after repeated diagnoses than after first diagnoses. The effects of repeated TCMR and repeated AMR remained significant even when observed in protocol biopsies in the absence of graft dysfunction. Repeated observation of BKPyVAN was the most detrimental of all diagnoses when observed in indication biopsies, but the least harmful when observed in protocol biopsies. Conclusion: Incidence of Banff histological diagnoses appears to be affected by earlier diagnoses, especially those of the same type. These repeated observations of a specific diagnosis have an additional effect on the hazard of graft failure, underscoring a critical unmet need for adequate treatment strategies for these recurrent or persistent injury processes. Lay summary: In two observational cohorts of 1819 and 750 kidney transplant recipients, kidney transplant biopsies were taken at multiple time points after transplantation. Based on the Banff classification for transplant pathology, various post-transplant diseases were diagnosed, often at more than one time point during follow-up. We assessed patterns in the occurrence of diagnoses over time and related these diagnoses to survival of the kidney grafts using survival models with time-dependent cumulative diagnoses. We found that repeated observation of the same diagnosis was much more common than consecutive observations of different diagnoses. Repeated diagnoses of tissue injury also decreased kidney graft survival more than single diagnoses. This indicates that treatment options for patients with repeated or persistent diagnoses are currently inadequate and that novel strategies are needed.
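A minimal sketch, in the spirit of (but not identical to) the paper's analysis, of a Cox model with a time-dependent cumulative diagnosis count, fit with lifelines' CoxTimeVaryingFitter on simulated biopsy intervals:

```python
# Sketch: cause-specific Cox model with a time-dependent cumulative diagnosis
# count (simulated data; hazard rises with each prior "TCMR" diagnosis).
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)
rows = []
for pid in range(200):
    count, t = 0, 0.0
    while t < 60:                                  # 60 months of follow-up
        stop = min(t + rng.exponential(12), 60.0)  # time to next biopsy/event
        fail = rng.random() < 0.03 * (1 + count)   # hazard rises with repeats
        rows.append((pid, t, stop, count, int(fail and stop < 60)))
        if fail or stop >= 60:
            break
        count += int(rng.integers(0, 2))           # maybe a new diagnosis
        t = stop

long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "tcmr_count", "graft_failure"])
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop", event_col="graft_failure")
ctv.print_summary()  # HR per additional prior TCMR diagnosis
```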
Navez, M.; Dos Santos Barata, E.; Maes, N.; Levtchenko, E.; Oliveira Arcolino, F.; Burdeyron, P.; Steichen, C.; Detry, O.; Gilbo, N.; Jouret, F.
The integration of regenerative medicine into dynamic organ preservation may mitigate ischemia-reperfusion injury in kidney transplantation. This systematic review and meta-analysis evaluated the therapeutic potential of stem cell-based interventions during machine perfusion. Following PRISMA guidelines, PubMed, Embase, and Scopus were searched for experimental studies using stem cells or extracellular vesicles (EVs) during hypothermic or normothermic machine perfusion in animal or discarded human kidneys. Outcomes included renal function, injury biomarkers, inflammation, and histology. Nine studies were included, seven of them in the meta-analysis. Despite heterogeneity in models and protocols, several reported reductions in inflammatory cytokines (e.g., IL-6, IL-1β) and injury biomarkers (e.g., NGAL) following stem cell or EV administration. However, meta-analysis showed no significant effects on creatinine clearance (standardized mean difference [SMD]: 0.00; 95% CI: -0.54 to 0.55), urine output (SMD: 0.54; 95% CI: -0.46 to 1.55), or NGAL (SMD: -1.68; 95% CI: -5.60 to 2.25). Histological protection varied, and stem cell retention was limited. Only one study assessed post-transplant function. While stem cell therapies during perfusion may have immunomodulatory and cytoprotective effects, consistent functional benefits were not observed. Further standardized studies, including transplant models and long-term outcomes, are needed to clarify therapeutic potential and optimize delivery strategies.
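For readers unfamiliar with the pooling behind these numbers, here is a compact sketch of how standardized mean differences are computed and combined under a DerSimonian-Laird random-effects model (the per-study summaries are invented):

```python
# Sketch of SMD (Hedges' g) pooling under DerSimonian-Laird random effects.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample (Hedges) correction."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

# Fabricated per-study creatinine-clearance summaries (treated vs. control)
studies = [hedges_g(55, 12, 8, 54, 13, 8),
           hedges_g(60, 15, 6, 57, 14, 6),
           hedges_g(48, 10, 10, 50, 11, 10)]
g = np.array([s[0] for s in studies])
v = np.array([s[1] for s in studies])

w = 1 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)       # Cochran's Q
tau2 = max(0.0, (q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (v + tau2)                                    # random-effects weights
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"SMD = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```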
Jambon, F.; Di Primo, C.; Dromer, C.; Demant, X.; Roux, A.; Le Pavec, J.; Brugiere, O.; Bunel, V.; Guillemain, R.; Goret, J.; Duclaut, M.; Cargou, M.; Ralazamahaleo, M.; Wojciechowski, E.; Guidicelli, G.; Hulot, V.; Devriese, M.; Taupin, J.-L.; Visentin, J.
Background: In lung transplantation, de novo immunodominant donor-specific anti-HLA antibodies recognizing HLA-DQ antigens (dn-iDSA-DQ) are predominant and can induce chronic lung allograft dysfunction (CLAD). We previously developed a method to measure the active concentration of dn-iDSA-DQ. We aimed to determine whether this new quantitative biomarker is associated with transplantation outcomes. Methods: This retrospective multicentre cohort study included 90 lung transplant recipients (LTRs) developing dn-iDSA-DQ, evidenced through single antigen flow beads (SAFB) follow-up. We measured the active concentration of dn-iDSA-DQ at the time of their first detection (T0) for all LTRs, and within the 2 years after DSA detection whenever possible. SAFB dn-iDSA-DQ characteristics and clinical data were retrieved up to 5 years after DSA detection. Results: We tested 184 sera by surface plasmon resonance (SPR) (n=90 at T0, n=94 within the 2 years after DSA detection), among which 63 (34.4%) had a quantifiable concentration of the dn-iDSA-DQ (≥0.3 nM). The median SAFB mean fluorescence intensity (MFI) of the dn-iDSA-DQ with a concentration ≥0.3 nM was higher (p<0.0001), yet the correlation between SAFB MFI and active concentration was only moderate (r=0.758, p<0.0001). In multivariate analysis, a concentration of the dn-iDSA-DQ ≥0.3 nM at T0 was independently associated with a lower 2-year CLAD-free survival (HR 2.06, p=0.02). A concentration of the dn-iDSA-DQ ≥0.3 nM within the 2 years from DSA detection was associated with lower graft survival in univariate analysis. Conclusions: The active concentration of dn-iDSA-DQ appears to be a valuable biomarker for identifying pathogenic DSA at their first detection, given its association with CLAD.
Hullin, R.; Pitta Gros, B.; Rocca, A.; Laptseva, N.; Martinelli, M. V.; Flammer, A. J.; Lu, H.; Meyer, P.; Leuenberger, N.; Mueller, M.
Background: Iron metabolism disorder is highly prevalent before and after heart transplantation (HTx). The impact of pretransplant and posttransplant iron disorder on posttransplant outcomes is unclear. Objective: Pretransplant serum levels of key regulator proteins of iron metabolism (hepcidin, interleukin-6, erythroferrone [ERFE]) were tested for prediction of the composite outcome of 1-year posttransplant all-cause mortality (ACM) or ≥moderate acute cellular rejection (ACR). Furthermore, serum levels of these proteins were measured at 1 year posttransplant to explore their posttransplant course and association with ACR. Results: In a multicenter cohort including 276 consecutive HTx recipients, patients with or without the outcome (n=118/158, respectively) did not differ in pretransplant demographics, mismatch of donor/recipient sex, mismatch of HLA epitopes, or hepcidin or interleukin-6 levels. However, pretransplant erythroferrone levels were higher (1.40 vs. 1.19 ng/mL; p=0.013) and hemoglobin levels were lower (124.5 vs. 127 g/L; p=0.004) among patients with the composite outcome. Pretransplant erythroferrone levels >2.25 ng/mL (4th quartile) were significantly associated with the composite outcome in multivariable analysis (OR 2.17; 95% CI 1.19-3.94, p=0.011; reference: 1st-3rd quartiles). In adjusted predicted-proportions analysis, the incidence of the composite outcome was higher in 4th-quartile patients than in 1st-3rd-quartile patients (58.0 vs. 37.7%; p=0.003). At 1 year posttransplant, levels remained high in 80.4% of patients with pretransplant erythroferrone levels >2.25 ng/mL; 88.4% of patients with pretransplant erythroferrone levels ≤2.25 ng/mL had high levels posttransplant. In 1-year survivors with high erythroferrone levels and ≥moderate ACR during the first postoperative year, the ratio of the opposing regulators of hepcidin gene expression, erythroferrone to interleukin-6, was higher than in those without ACR (1.18 vs. 0.41; p=0.016). Hepcidin levels did not differ between these two subgroups, indicating a disproportionate ERFE increase. Conclusion: High pretransplant erythroferrone levels predict the composite posttransplant outcome of 1-year ACM or ≥moderate ACR. Disproportionately high posttransplant erythroferrone levels are associated with ≥moderate acute cellular rejection.
Ctortecka, C.; Jaishankar, D.; Su, P.; Huang, C.-F.; Pla, I.; Henning, N.; Hollas, M. A. R.; Callegari, M. A.; Taylor, M. E.; Lee, Y. M.; Daud, A.; Pinelli, D. F.; Rohan, V.; Caldwell, M. A.; Forte, E.; Sanchez, A.; Kelleher, N. L.; Nadig, S. N.
Kidney transplantation faces a critical paradox: while thousands await organs, approximately 30% of potential deceased donor kidneys are discarded for various reasons, including subjective assessments due to the lack of an objective molecular biomarker of preservation quality. Here, we applied novel "top-down" proteoform imaging mass spectrometry across living donor (LD), deceased donor (donation after brain death [DBD] or cardiac death [DCD]), and discarded human kidneys to quantify proteoforms correlating with post-transplant kidney function. This approach preserves post-translational modifications and splice variants, revealing molecular tissue variability beyond protein presence. LD kidneys displayed robust metabolic signatures, including L-xylulose reductase and cytochrome oxidase subunits, whereas deceased donor and discarded organs showed elevated cellular stress markers such as alpha-B-crystallin and peroxiredoxin 1. Post-transplant blood proteoform analysis validated the tissue findings, demonstrating persistent cellular stress and immune activation in deceased donor recipients compared with physiologic wound healing in LD recipients. Consistent with these molecular predictions, serum creatinine levels were highest in DCD, intermediate in DBD, and lowest in LD recipients. The intersection of tissue proteoform signatures across all marginal tissues identified four proteoforms consistently elevated in deceased and discarded kidneys: ACTG1, acetylated CRYAB, PARK7, and S100A4. Collectively, these proteoforms capture key molecular indicators of graft quality, reflecting oxidative stress, cellular injury, and immune activation pathways. As such, they represent promising point-of-care (POC) biomarker candidates for objective kidney classification, potentially improving donor kidney utilization. Translational statement: Current methods for evaluating donor kidney quality rely on subjective assessments, contributing to the discard of approximately 30% of potentially viable organs. This study demonstrates that "top-down" proteomics can objectively identify molecular signatures distinguishing high-quality from marginal donor kidneys. Top-down proteomics analyzes intact proteins with their post-translational modifications or cleavage products, termed proteoforms, to provide mechanistic insights into graft quality. We identified four proteoforms (ACTG1, acetylated CRYAB, PARK7, and S100A4) consistently elevated in deceased and discarded kidneys, reflecting oxidative stress, cellular injury, and immune activation. These molecular markers correlated with post-transplant kidney outcome, as measured by serum creatinine levels and recipient blood proteoforms. As a next step, validation in larger cohorts could establish these proteoforms as point-of-care biomarkers for real-time donor kidney assessment during procurement. This objective molecular stratification could reduce unnecessary organ discards and improve transplant outcomes by matching organ quality with recipient risk profiles.
Velez-Bermudez, M.; Leyva, Y.; Puttarajappa, C.; Kalaria, A.; Zhu, Y.; Ng, Y.-H.; Unruh, M.; Boulware, L. E.; Tevar, A.; Dew, M. A.; Myaskovsky, L.
Background: In the United States, streamlining the kidney transplantation (KT) evaluation process may reduce disparities and barriers to KT access. Prior work showed that the Kidney Transplant Fast Track (KTFT) program shortened this process and reduced racial disparities in waitlisting and overall KT. However, within a setting where evaluation-related structural barriers have been addressed, a comprehensive longitudinal evaluation incorporating sociocultural factors (e.g., medical mistrust, healthcare-related discrimination/racism) alongside race/ethnicity as prespecified predictors across multiple KT milestones, including KT type (living donor KT [LDKT] and deceased donor KT [DDKT]), has not been performed. Methods: In this secondary analysis, data came from the KTFT study, a prospective KT candidate cohort. Participants were recruited before the start of KT evaluation (05/2015-06/2018), coinciding with baseline measure collection, then followed via medical record through 08/2022. We used hierarchically adjusted Fine-Gray proportional hazards models in this exploratory analysis. Results: Among 1108 KT candidates (243 Black, 783 White, 82 Other), medical mistrust was associated with a lower cumulative incidence of waitlisting, but no other sociocultural factors were associated with outcomes. Racial and ethnic differences emerged for KT type: Black participants had a greater cumulative incidence of DDKT, and participants categorized as Other race/ethnicity had a lower cumulative incidence of LDKT, relative to White participants. Conclusions: Although KTFT reduced racial/ethnic disparities in waitlisting and overall KT receipt, we identified racial/ethnic differences in LDKT and DDKT. Medical mistrust was a significant barrier to waitlisting. Findings suggest that even when the KT evaluation process is streamlined, sociocultural factors and race/ethnicity may influence KT outcomes.
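Fine-Gray regression is typically fit with R's cmprsk package; as a Python-flavored sketch of the underlying quantity, the example below estimates the cumulative incidence of waitlisting with death as a competing risk via the Aalen-Johansen estimator in lifelines (all data fabricated, and this is the descriptive counterpart of, not a substitute for, the paper's models):

```python
# Sketch: cumulative incidence of waitlisting with a competing risk of death,
# via the Aalen-Johansen estimator (lifelines does not provide Fine-Gray).
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(7)
n = 300
durations = rng.uniform(1, 36, n)   # months from start of KT evaluation
# Event codes: 0 = censored, 1 = waitlisted, 2 = died before waitlisting
events = rng.choice([0, 1, 2], size=n, p=[0.30, 0.55, 0.15])

ajf = AalenJohansenFitter()
ajf.fit(durations, events, event_of_interest=1)
print(ajf.cumulative_density_.tail())  # cumulative incidence of waitlisting
```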
Ekanayake, C.; Husain, S. A.; Yu, M. E.; Adler, J.; Muppavarapu, C. S.; Schold, J.; Mohan, S.
Allocation out of sequence (AOOS) allows organ procurement organizations (OPOs) to bypass the standard match-run to expedite kidney placement and prevent nonuse. Inclusion of all AOOS attempts, including those that do not lead to successful transplant, is vital when assessing the impact of AOOS on organ utilization. We assessed the frequency of AOOS documentation in discarded kidneys. Using Scientific Registry of Transplant Recipients (SRTR) Potential Transplant Recipient (PTR) offer data from 2021-2024, we identified match-runs with at least one discarded kidney. AOOS was defined according to Health Resources and Services Administration (HRSA) guidelines, and match-runs were stratified by kidney recovery and disposition patterns, focusing on 2024, when AOOS was well established. AOOS coding frequency was assessed within each group and across OPOs. In 2024, only 4.3% of all match-runs with at least one discarded kidney contained evidence of AOOS documentation. Across OPOs, AOOS-coded discards ranged from 0.0% to 17.1% (median 3.9%, IQR 2.7%-7.6%). AOOS documentation among discarded kidneys remains rare and inconsistent, suggesting major data-capture deficiencies when attempting to accurately assess AOOS efforts. Improved AOOS reporting is essential before future expedited allocation pathways can be effectively evaluated or implemented.
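The OPO-level summary reduces to a groupby in pandas; a toy sketch calibrated only to the reported overall rate:

```python
# Sketch of the OPO-level summary: per-OPO share of discarded kidneys with
# AOOS documentation, then the median and IQR across OPOs (toy data tuned
# only to the ~4.3% overall rate quoted above).
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "opo": rng.choice([f"OPO{i:02d}" for i in range(55)], size=8000),
    "aoos_coded": rng.random(8000) < 0.043,
})
rate_by_opo = df.groupby("opo")["aoos_coded"].mean() * 100
q1, med, q3 = np.percentile(rate_by_opo, [25, 50, 75])
print(f"median {med:.1f}% (IQR {q1:.1f}-{q3:.1f}%) across {rate_by_opo.size} OPOs")
```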
Ueland, K. M.; Elahi, T.; Rasmussen, M.; Wolfe, A.; Purcell, H.; Chakka, S. R.; Mirimo-Martinez, M.; Persinger, H.; Johnson, K.; Boynton, A.; McMillen, K.; Byelykh, M.; Biernacki, M.; Yeh, A.; Ali, N.; Manjappa, S.; Wuliji, N.; Fredricks, D.; Bleakley, M.; Holmberg, L.; Schenk, J.; Raftery, D.; Ma, J. A.; Hill, G.; Neuhouser, M. L.; Lee, S.; Markey, K. A.
Plant-based dietary strategies may offer a tractable approach to mitigating microbiome disruption and improving outcomes in patients undergoing autologous hematopoietic cell transplantation (auto-HCT) for multiple myeloma, a population in whom intestinal dysbiosis has been linked to infectious complications and inferior survival. We conducted a single-arm study to test the feasibility and biological activity of a high-fiber, plant-based, whole-food meal delivery intervention during the peri-transplant period. Adults with multiple myeloma (n = 22) received fully prepared, plant-based meals for 5 weeks spanning conditioning, neutropenia, and early recovery, with the goal of supporting consumption of nutrient-dense, high-fiber foods despite transplant-related symptoms that often limit oral intake. The primary endpoints were feasibility and tolerability, defined by successful enrollment, adherence to study procedures, and patient-reported intake of study meals; diet was quantified using prospective food diaries and 24-hour dietary recall surveys. Secondary endpoints included changes in gut microbiome composition and function assessed by shotgun metagenomic sequencing and stool short-chain fatty acid (SCFA) measurements. The intervention was feasible and generally well tolerated, with all participants consuming at least some proportion of delivered meals and with adherence sufficient to support planned dietary and correlative analyses. Greater intake of study meals was associated with more pronounced shifts in gut microbial communities, including enrichment of SCFA-producing taxa and compositional changes consistent with a fiber-responsive microbiome. Stool SCFA concentrations increased from baseline to the end of the intervention, suggesting a functional impact of the dietary strategy on microbial metabolite production during the peri-transplant period. These findings demonstrate that a plant-based meal delivery intervention is implementable during auto-HCT and suggest dose-dependent modulation of the gut microbiome and its metabolic output. Larger randomized trials are warranted to determine whether microbiome-targeted nutrition can reduce transplant-related toxicities, enhance immune recovery, and improve disease control in multiple myeloma. The trial is registered at ClinicalTrials.gov (NCT06559709).
Massie, A.; Yan, L.; Xue, R.; Stewart, D. E.; Husain, S. A.; Levan, M. L.; Gentry, S.; Lonze, B. E.; Segev, D.
A substantial proportion of recovered deceased-donor (DD) kidneys go unused. Accumulated refusals by transplant centers during the offer process may signal nonuse risk, and quantifying this phenomenon could inform frameworks for rescue strategies or out-of-sequence (OOS) placement. Using OPTN data on adult DD kidneys offered for transplant in 2024, we empirically estimated the probability of nonuse as a function of accumulated refusal count (ARC). Kidneys transplanted OOS were excluded from the analysis. Among recovered adult DD kidneys offered in-sequence, the risk of nonuse exceeded 50% after ARC=6 for blood type O kidneys, ARC=4 for types A and B, and ARC=1 for type AB. Risk exceeded 80% after ARC=128 (type O), ARC=55 (type A), ARC=50 (type B), and ARC=14 (type AB), and exceeded 90% after 980, 414, 278, and 41 refusals, respectively. The C-statistic of the ARC by blood type ranged from 0.896 to 0.933. ARC thresholds offer a pragmatic trigger for rescue allocation, incorporating center perception of kidney quality not easily captured in standard metrics. A policy allowing OPOs to offer kidneys OOS or deploy alternative rescue strategies once a certain ARC threshold is reached may improve utilization of hard-to-place donor kidneys while keeping easier-to-place kidneys in-sequence.
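A toy reconstruction of the core analysis, estimating the empirical nonuse probability by ARC and its discrimination (C-statistic), on fabricated offer data:

```python
# Sketch: empirical P(nonuse | ARC) and its C-statistic on fabricated data.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
arc = rng.geometric(p=0.05, size=n)              # refusal count per kidney
p_nonuse = 1 / (1 + np.exp(-(np.log(arc) - 2)))  # nonuse risk grows with ARC
nonuse = rng.random(n) < p_nonuse

df = pd.DataFrame({"arc": arc, "nonuse": nonuse})
by_arc = df.groupby("arc")["nonuse"].mean()
print(by_arc.head(10))                           # empirical P(nonuse | ARC)
print("first ARC with >50% nonuse risk:", by_arc[by_arc > 0.5].index.min())
print("C-statistic:", round(roc_auc_score(df["nonuse"], df["arc"]), 3))
```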
Lopez-Lopez, V.; Lucas-Ruiz, F.; Maina, C.; Anton-Garcia, A. I.; Llado, L.; Vila-Tura, M.; Serrano, T.; Lopez-Andujar, R.; Catalayud, D.; Perez-Rojas, J.; Lopez-Baena, J. A.; Peligros, I.; Sabater-Orti, L.; Mora-Oliver, I.; Alfaro-Cervello, C.; Pacheco, D.; Asensio-Diaz, E.; Madrigal-Rubiales, B.; Dopazo, C.; Gomez-Gavara, C.; Salcedo-Allende, M. T.; Gomez-Bravo, M. A.; Bernal-Bellido, C.; Borrero-Martin, J. J.; Serrablo, A.; Serrablo, L.; Horndler, C.; Blanco-Fernandez, G.; Jaen-Torrejimeno, I.; Diaz-Delgado, M.; Eshmuminov, D.; Hernandez-Kakauridze, S.; Vidal-Correoso, D.; Martinez-Caceres,
Background & Aims: Perihilar cholangiocarcinoma is an aggressive malignancy with clinical heterogeneity and poor long-term outcomes after resection. Current prognostic assessment relies mainly on anatomical staging and pathological features, which incompletely capture postoperative risk. We aimed to determine whether integrative analysis of clinical, surgical, pathological, and tumor genomic data could improve time-resolved, individualized recurrence-risk prediction after curative-intent resection. Methods: We performed a multicenter retrospective study including patients undergoing curative-intent resection for perihilar cholangiocarcinoma in ten Spanish hospitals (2003-2023). Overall and disease-free survival were analyzed using Cox models. Outcome-agnostic clinical phenotypes were derived by unsupervised clustering of clinical and surgical features. Targeted tumor sequencing of cancer-associated hotspot regions and selected genes was performed. Prognostic models integrating clinical and genomic data were trained and evaluated in independent training/test sets using penalized and latent-component Cox frameworks, with time-dependent discrimination. Results: The final cohort comprised 142 patients, with a median follow-up of 26.4 months. Recurrence occurred in 61.3% of patients, and 53.5% died during follow-up. Classical pathological factors were strongly associated with survival and recurrence. Unsupervised, outcome-agnostic clustering identified three reproducible clinical phenotypes with markedly different recurrence patterns and survival, only partially explained by anatomical staging. Integrative clinical-genomic modelling further improved recurrence-risk prediction, achieving high discrimination in independent validation (time-dependent AUC ~0.8). Moreover, the integrative model assigned higher risk over time to patients who relapsed. Patients combining an unfavorable clinical phenotype with high genomic-derived risk exhibited a high probability of early recurrence. Conclusions: Integrated clinical phenotyping and targeted genomic profiling substantially refine recurrence-risk stratification after resection of perihilar cholangiocarcinoma beyond anatomical staging alone. This provides a pragmatic framework for risk-adapted postoperative surveillance and therapeutic decision-making. Impact and Implications: This study provides a data-driven framework integrating clinical, surgical, and targeted genomic information to refine prognostic stratification after resection of perihilar cholangiocarcinoma, addressing the limitations of anatomy-based staging in capturing biological heterogeneity. The results are particularly relevant for clinicians managing postoperative surveillance and adjuvant strategies, as they identify patient subgroups with markedly different risks of early recurrence despite similar conventional staging. In practical terms, the combination of unsupervised clinical phenotyping and a targeted, biologically informed genomic panel could support risk-adapted follow-up intensity, selection for adjuvant or experimental therapies, and enrolment into clinical trials. While prospective validation is required before routine implementation, this approach offers a feasible and interpretable pathway toward precision postoperative management in a highly aggressive malignancy.
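For a concrete starting point, here is a minimal sketch, not the authors' model, of a penalized Cox fit with time-dependent AUC evaluation, using scikit-survival on simulated clinical-genomic features:

```python
# Sketch: penalized Cox regression plus time-dependent AUC on simulated data.
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

rng = np.random.default_rng(0)
n, p = 142, 30                              # cohort size as reported; 30 toy features
X = rng.normal(size=(n, p))
risk = X[:, 0] + 0.5 * X[:, 1]              # two "informative" features
time = rng.exponential(24 / np.exp(risk))   # recurrence-free months
event = rng.random(n) < 0.6                 # ~60% recurrence, as reported
y = Surv.from_arrays(event=event, time=time)

train, test = slice(0, 100), slice(100, n)
model = CoxnetSurvivalAnalysis(l1_ratio=0.9).fit(X[train], y[train])
scores = model.predict(X[test])             # higher score = higher predicted risk

eval_times = np.quantile(time[test][event[test]], [0.25, 0.50, 0.75])
auc, mean_auc = cumulative_dynamic_auc(y[train], y[test], scores, eval_times)
print("time-dependent AUC:", np.round(auc, 2), "mean:", round(float(mean_auc), 2))
```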
Paverd, H.; Gao, Z.; Mahani, G.; Fabre, M.; Burge, S.; Hoare, M.; Crispin-Ortuzar, M.
Background & Aims: Liver cancer primarily develops in patients with chronic liver disease (CLD), yet most cases are diagnosed at an advanced stage with poor prognosis. While clinical surveillance of patients with CLD generates extensive longitudinal data, its unstructured free-text nature hinders large-scale research. To unlock this real-world evidence, we developed a scalable framework using open-source Large Language Models (LLMs) to transform unstructured clinical text into structured data. Methods: We conducted a multi-stage evaluation of LLM-based extraction from multi-source clinical documentation of liver transplant recipients. A calibration set comprising 507 reports (414 radiology, 65 pathology, and 28 liver transplant assessment reports) from 30 patients was manually annotated to benchmark four open-source LLMs (Llama 3.1 8B, Llama 3.3 70B, OpenBioLLM 70B, DeepSeek R1 8B) against a regular-expression baseline across 73 tasks. To ensure structured outputs, we compared constrained decoding (Guidance and Ollama packages) against unconstrained prompting across 5,590 prompt-output pairs. The finalised pipeline was then applied to the full cohort of 835 patients transplanted in our centre over the past decade. Results: Among the models tested, Llama 3.3 70B performed best, exceeding 90% accuracy on 59/73 tasks and outperforming both a medically fine-tuned model (OpenBioLLM 70B) and a smaller variant (Llama 3.1 8B). Constrained decoding achieved >99.9% format adherence, far surpassing unconstrained prompting (87.4%). Applied to the full cohort, the pipeline successfully analysed 22,493 reports to generate 37,125 datapoints (45 variables, 835 patients) without manual annotation. Further analysis confirmed known liver cancer risk factors (male sex, viral hepatitis, smoking, diabetes) and allowed reconstruction of longitudinal disease timelines. Conclusions: This work provides a scalable blueprint for transforming real-world clinical free-text into structured formats, paving the way for accelerated, data-driven research into complex pre-cancerous diseases like CLD.
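As a hedged illustration of the constrained-decoding idea, the sketch below uses the Ollama Python client's JSON-format mode on a made-up report; the model tag, prompt, and field names are assumptions, not the paper's pipeline:

```python
# Sketch: structured extraction from a clinical report via Ollama's JSON mode.
import json
import ollama

report = ("MRI liver: 2.1 cm arterially enhancing lesion in segment VI, "
          "washout on delayed phase.")

prompt = (
    "Extract the following fields from the radiology report as JSON with keys "
    '"lesion_present" (bool), "max_diameter_cm" (number or null), '
    '"segment" (string or null).\n\nReport:\n' + report
)

response = ollama.chat(
    model="llama3.3:70b",                           # assumed local model tag
    messages=[{"role": "user", "content": prompt}],
    format="json",                                  # constrain output to valid JSON
)
data = json.loads(response["message"]["content"])
print(data)
```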
de Oliveira Andrade, L. J.; Parana, R.; Matos de Oliveira, G. C.; Vinhaes Bittencourt, A. M.; de Mattos Salles, O. J.; Matos de Oliveira, L.
Introduction: Tacrolimus remains central to liver transplantation, yet its narrow therapeutic index and pharmacokinetic variability are associated with an increased risk of post-transplant diabetes mellitus (PTDM). While polymorphisms in metabolizing enzymes modulate drug exposure and diabetogenic risk, these relationships have not been systematically integrated through targeted pharmacogenomic approaches. Objective: To systematically evaluate genetic variants in tacrolimus-metabolizing genes and their associations with PTDM through integrated in silico pharmacogenomic analysis. Methods: An in silico analysis was performed, integrating data from public repositories (PharmGKB), curated literature, and functional annotations of genetic variants. Machine learning models were developed using synthetic data generated from literature-derived effect sizes to demonstrate proof-of-concept feasibility. We prioritized genes (CYP3A5, CYP3A4, ABCB1) based on PharmGKB evidence levels, functional impact, and clinical associations with tacrolimus exposure and PTDM risk, incorporating genotype information, drug dosing, and metabolic outcomes. Results: The CYP3A5*1 allele emerged as a key determinant, consistently requiring 1.5- to 2.8-fold higher tacrolimus doses and conferring a significantly elevated risk of PTDM compared to non-expressers, an effect mediated by cumulative drug exposure. In the systematic review and synthetic modeling, carriers of functional CYP3A5 alleles (expresser genotypes) exhibited a significantly increased PTDM risk relative to non-expressers, demonstrating a clear dose-exposure-toxicity relationship. In contrast, CYP3A4 and ABCB1 showed only suggestive, and heterogeneous, evidence of association. Conclusion: This in silico pharmacogenomic study demonstrates a clinically significant association between genetic variability in tacrolimus metabolism and the development of PTDM following liver transplantation. These findings support genotype-guided strategies to optimize immunosuppressive therapy and advance precision medicine in transplant care.
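A proof-of-concept sketch in the spirit of the synthetic-data modelling described, with all effect sizes invented rather than taken from the paper:

```python
# Sketch: simulate CYP3A5 expresser status, dose, and exposure-mediated PTDM
# risk, then fit a simple classifier (illustrative effect sizes only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1000
expresser = rng.random(n) < 0.3             # carries a functional CYP3A5*1 allele
dose = np.where(expresser, rng.normal(2.0, 0.4, n), rng.normal(1.0, 0.2, n))
exposure = dose * rng.lognormal(0.0, 0.2, n)          # cumulative exposure proxy
logit = -2 + 1.2 * (exposure - exposure.mean())       # risk mediated by exposure
ptdm = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([expresser.astype(float), dose, exposure])
auc = cross_val_score(LogisticRegression(max_iter=1000), X, ptdm,
                      cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.round(2))
```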
Yu, J.
Vaccination frequently elicits suboptimal immunogenicity in organ transplant recipients, particularly those on long-term immunosuppressive therapy, highlighting the need for an improved understanding of immunosuppression mechanisms and optimized vaccination strategies. This study enrolled a cohort of 132 individuals and observed significantly lower antibody levels in kidney transplant recipients (KTRs) compared to non-transplant controls (non-KTRs). Antibody levels were inversely associated with both the dosage and duration of immunosuppressive therapy. Complementary small-animal studies demonstrated that immunosuppressive treatment dose-dependently and reversibly impaired antibody production, primarily by depleting immune cells, notably B cells. A single dose of an adenoviral vector-based vaccine demonstrated enhanced immunogenicity relative to two doses of an alum-adjuvanted protein vaccine, inducing potent neutralizing antibodies (NAbs) and a Th1-biased T-cell response even under continuous immunosuppression. The enhanced response was driven by reduced interference from pre-existing antibodies, sustained transgene expression, and the reprogramming of lipid metabolism to activate T and B cells. Our findings advocate for tailored vaccination strategies, positioning adenoviral vectors as a candidate modality for this vulnerable population.
Fink, A.; Burzer, F.; Sacalean, V.; Rau, S.; Kaestingschaefer, K. F.; Rau, A.; Koettgen, A.; Bamberg, F.; Jaenigen, B.; Russe, M. F.
Background: Kidney volumetry derived from CT has been proposed as a surrogate of renal function in living kidney donor evaluation. However, clinical integration has been limited by reader-dependent workflows and semiautomatic methods susceptible to image quality. Purpose: To evaluate whether fully automated CT-based segmentation of renal cortex, medulla, and total parenchymal volume provides reproducible volumetric biomarkers associated with global and split renal function in living kidney donor candidates. Materials and Methods: In this retrospective single-center study, 461 living kidney donor candidates (2003-2021) underwent contrast-enhanced abdominal CT. A convolutional neural network was trained to automatically segment cortical, medullary, and total parenchymal volumes on arterial-phase images. Segmentation performance was evaluated against manual reference annotations. Volumes were indexed to body surface area. Associations with eGFR, 24-hour creatinine clearance, cystatin C, and tubular clearance were assessed using the Spearman correlation coefficient (ρ), and side-specific volume fractions were compared with scintigraphy-derived split function. Results: Automated segmentation achieved excellent agreement with expert reference segmentations (Dice 0.95 for cortex; 0.90 for medulla). eGFR correlated moderately with cortical (ρ = 0.46) and total parenchymal volume (ρ = 0.45), and modestly with medullary volume (ρ = 0.30). Similar associations were observed for other global measures, with the strongest correlation between cortical volume and tubular clearance (ρ = 0.53). Side-specific volume fractions correlated with scintigraphy-derived split renal function (ρ = 0.49-0.56; all p < 0.001). Conclusion: Automated CT-based renal subcompartment segmentation provides reproducible volumetric biomarkers within routine donor evaluation. Cortical volume performs comparably to total parenchymal volume and tracks split renal function at the cohort level, suggesting potential utility in donor assessment.
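Both headline metrics are easy to compute; a self-contained sketch with fabricated arrays:

```python
# Sketch: Dice overlap between an automated and a manual segmentation mask,
# and Spearman correlation between indexed cortical volume and eGFR
# (all arrays fabricated for illustration).
import numpy as np
from scipy.stats import spearmanr

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient for two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(5)
auto = rng.random((64, 64, 64)) < 0.2
manual = auto ^ (rng.random(auto.shape) < 0.01)   # nearly identical mask
print(f"Dice = {dice(auto, manual):.3f}")

cortical_volume = rng.normal(180, 30, 461)        # toy indexed volumes, n=461
egfr = 40 + 0.25 * cortical_volume + rng.normal(0, 15, 461)
rho, p = spearmanr(cortical_volume, egfr)
print(f"Spearman rho = {rho:.2f}, p = {p:.2g}")
```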
Pollo, B. A. L. V.; Ching, D.; Idolor, M. I.; King, R. A.; Climacosa, F. M.; Caoili, S. E.
Background: There is a need for synthetic peptide-based serologic assays that exploit avidity to replace whole antigens while enabling low-cost diagnostics in resource-limited settings. Objective: To evaluate the diagnostic accuracy of a polymeric peptide-based ELISA leveraging avidity to enhance signal. Methods: A 15-member SARS-CoV-2 peptide library corresponding to multiple epitope clusters and proteins was screened by indirect ELISA using pooled sera from RT-PCR-confirmed COVID-19 patients to identify peptides with possible diagnostic utility. The identified lead candidate, S559, carried a terminal cysteine substitution to allow disulfide polymerization, and the resulting avidity gain was evaluated by comparing the apparent dissociation constant (KDapp) before and after depolymerization with N-acetylcysteine. The performance of an optimized ELISA using S559 was evaluated on 1,222 prospectively collected COVID-19 serum samples and 218 biobanked pre-COVID control serum samples. Results: Polymeric S559, with a KDapp of 29.26 nM⁻¹, demonstrated a 218% avidity gain relative to the completely depolymerized form. At pre-defined thresholds, the optimized S559 ELISA had a sensitivity of 83.39% (95% CI: 81.18%-85.43%) and a specificity of 96.79% (95% CI: 93.50%-98.70%). At post hoc thresholds determined by the Youden index, sensitivity and specificity reached 95.01% (95% CI: 93.63%-96.16%) and 100.00% (95% CI: 98.32%-100.00%), respectively. Conclusion: Homomultivalent epitope presentation using polymeric S559 enables a highly specific immunoassay on human sera that may have important value in detecting antibodies, whether for diagnosing infection, confirming vaccination status, or conducting surveillance.
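A sketch of the accuracy calculation with exact Clopper-Pearson intervals via statsmodels; the counts below are an illustrative reconstruction from the reported percentages and denominators, not the study's raw 2x2 table:

```python
# Sketch: sensitivity/specificity with exact (Clopper-Pearson) 95% CIs.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 1019, 203   # ~83.39% of 1,222 COVID-19 sera testing positive
tn, fp = 211, 7      # ~96.79% of 218 pre-COVID controls testing negative

sens, spec = tp / (tp + fn), tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="beta")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="beta")
print(f"sensitivity {sens:.2%} (95% CI {sens_ci[0]:.2%}-{sens_ci[1]:.2%})")
print(f"specificity {spec:.2%} (95% CI {spec_ci[0]:.2%}-{spec_ci[1]:.2%})")
```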
Bronder, S.; Urschel, R.; Reinhardt, F.; Mihm, J.; Schlienger, E.; Schmidt, T.; Brueckner, S.; Sester, U.; Sester, M.
Annual immunisation against both COVID-19 and seasonal influenza is becoming standard of care, particularly ahead of anticipated winter waves. These vaccines may be co-administered on the same day or sequentially on separate days. Data on immunogenicity and the impact of consecutive vaccinations on spike-specific humoral and cellular immunity in dialysis patients remain limited. In this real-world observational study, SARS-CoV-2-specific immune responses were evaluated in dialysis patients receiving the monovalent XBB.1.5 vaccine followed by a quadrivalent influenza vaccine 14 days later, or either vaccine alone. Specific antibodies and T cells were quantified and characterized using enzyme-linked immunosorbent assay and flow cytometry. Baseline analyses from a reference group prior to the vaccination season showed that most patients had detectable SARS-CoV-2- and influenza-specific immunity. Both the XBB.1.5 and the influenza vaccine substantially enhanced pre-existing antigen-specific humoral and cellular responses. When comparing XBB.1.5-vaccinated patients with and without subsequent influenza vaccination, the magnitude of XBB.1.5-induced antibody or T-cell responses did not differ. Likewise, the influenza vaccine had no non-specific effect on SARS-CoV-2-specific immune responses. Finally, spike-specific immunity remained stable over a six-month period and persisted at levels exceeding those of unvaccinated patients assessed during the same period. In conclusion, sequential administration of COVID-19 and influenza vaccines in dialysis patients is feasible and does not compromise the immunogenicity of either vaccine. Our data are encouraging in the context of ongoing development of additional mRNA-based vaccines that may require administration in close temporal proximity to seasonal influenza immunisation, and underscore the benefit of booster vaccination in individuals with impaired immune function.
Marchand, S.; Trochel, A.; Loirat, M.; Mignon, J.; Letellier, T.; Braud, M.; Delbos, L.; Fourgeux, C.; Taouli, S.; Peltier, C.; Gautreau-Rolland, L.; Poschmann, J.; Blancho, G.; Saulquin, X.; Bressollette-Bodin, C.; McILROY, D.
BK polyomavirus (BKPyV) is a major complication in kidney transplant recipients (KTR), for whom no specific antiviral therapy is available. Modulation of immunosuppressive therapy results in virus clearance in most KTR with BKPyV DNAemia (controllers), but a significant minority fail to clear the virus (non-controllers). Here, we adapt LIBRA-seq, which links antibody sequence data to antigen specificity, to intact viral capsids of the four BKPyV genotypes to study and compare BKPyV-specific B-cell repertoires in controllers (n=8) versus non-controllers (n=3). Sequences were obtained for 5197 BKPyV-specific antibodies, and predicted antigen specificities were validated by ELISA and neutralization assays (n=21 antibodies). We show that cross-genotype reactivity results from the recruitment of numerous broadly cross-reactive B-cell clones with preferential binding to the infecting genotype, making up 4.3% to 44.6% of the BKPyV-specific repertoire, while true broadly neutralizing antibodies are rare. The proportions of broadly specific and isotype-switched antibodies, rates of somatic hypermutation, and repertoire diversity were comparable in both patient groups, indicating that there is no identifiable deficit in the humoral response mounted by BKPyV non-controllers and supporting the notion that humoral immunity alone is insufficient to control established BKPyV replication. This work shows that LIBRA-seq can be successfully applied to a non-enveloped virus and provides a framework for analyzing antiviral B-cell repertoires and antibody specificity in clinically relevant settings.
Pande, A.; Adaniya, S.; Clark, W.; Wilkinson, R.; Grazziutti, M.; Apewokin, S.
Background: Antibiotic stewardship during stem cell transplantation (SCT) is challenging. Procalcitonin (PCT) has been employed successfully in critical care patients to safely guide stewardship. However, procalcitonin-guided stewardship has not been robustly assessed in SCT recipients. We sought to evaluate the potential utility of PCT to guide antimicrobial de-escalation during engraftment. Methods: 100 SCT patients were prospectively enrolled in a "strategy trial" and had infectious complications documented. Laboratory parameters (CBC, BMP, CRP) were obtained daily as standard of care (SOC), while PCT was obtained for research purposes. Providers were blinded to PCT results. We compared the duration of antimicrobial escalation between actual events (SOC model) and a proposed PCT model. In this hypothetical PCT model, antibiotic de-escalation would occur if CRP remained <100 mg/dl and PCT <0.25 ng/ml after 3 days of escalation. Escalation events were defined as a substitution or addition of an antimicrobial agent after initiation of prophylactic antimicrobials. Results: 77 patients had escalation events, and of these, 33 had bacterial infections. A total of 136 antimicrobial escalation events were identified, of which only 39 (28.7%) were associated with documented infections. The standard-of-care model had a mean duration (±SD) of 9.08 (±6.08) antibiotic days. If the PCT model were employed, the mean duration (±SD) would be 4.44 (±6.16) days (p<0.001). The PCT model, however, would have missed 11 infections. Conclusion: Procalcitonin-guided antimicrobial stewardship during autologous stem cell transplantation is feasible; however, optimization is needed before it can be used as a tool to guide antibiotic prophylaxis during SCT.
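The proposed rule is simple enough to state directly in code; a direct encoding of the thresholds quoted above, with units as stated in the abstract:

```python
# Direct encoding of the hypothetical de-escalation rule described above
# (thresholds and units as stated in the abstract).
def can_deescalate(days_escalated: int, crp_mg_per_dl: float,
                   pct_ng_per_ml: float) -> bool:
    """De-escalate if, after >=3 days of escalation, CRP <100 and PCT <0.25."""
    return days_escalated >= 3 and crp_mg_per_dl < 100 and pct_ng_per_ml < 0.25

print(can_deescalate(3, 42.0, 0.11))   # True  -> de-escalate
print(can_deescalate(3, 150.0, 0.11))  # False -> continue escalated therapy
```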
Mitchell, S. T.; Spyker, D.; Robbins, G.; Rumack, B.
Amatoxin-induced acute liver failure complicates misidentified foraged mushroom ingestion worldwide; abrupt multisystem collapse can punctuate apparent improvement. Our prospective single-arm clinical trial investigated proactive toxicokinetic-based management to preserve elimination capacity: sustained enhanced hydration to maintain renal clearance; fasting plus octreotide to suppress meal-driven enterohepatic circulation; and intravenous silibinin to inhibit OATP1B3-mediated hepatic uptake, enabling safe passage and elimination of gallbladder-confined amatoxin-laden bile. In the safety population (N=99), transplant-free recovery (TFR) was 88.0% (87 recoveries, 6 transplants, 6 deaths); in the protocol-adherent efficacy population (n=86), TFR was 98.8% (85 recoveries, 1 transplant, 0 deaths). Multivariable analysis identified uninterrupted hydration as the strongest TFR predictor (P<0.001), followed by earlier silibinin initiation (P=0.003); octreotide shortened INR recovery by 11 hours (P=0.033). These findings support a toxin-elimination model in which preserved renal clearance and biliary sequestration are central determinants of recovery. The kinetic balance between renal clearance and hepatic uptake governs both recovery and collapse.