
MethodsX

Elsevier BV

All preprints, ranked by how well they match MethodsX's content profile, based on 14 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.

1
Data variability in standardised cell culture experiments

Reddin, I.; Fenton, T. R.; Wass, M. N.; Michaelis, M.

2021-02-28 pharmacology and toxicology 10.1101/2021.02.27.433153 medRxiv
Top 0.1%
7.2%

Despite much debate about a perceived reproducibility crisis in the life sciences, it remains unclear what level of replicability is technically possible [1,2]. Here, we analysed the variation among drug response data of the NCI60 project, which for decades has tested anti-cancer agents in a 60-cell line panel following a standardised protocol [3]. In total, 2.8 million compound/cell line experiments are available in the NCI60 resource CellMiner [4]. The largest fold change between the lowest and highest GI50 (concentration that reduces cell viability by 50%) in a compound/cell line combination was 3.16 × 10^10. All compound/cell line combinations with >100 experiments displayed maximum GI50 fold changes >5, 99.7% maximum fold changes >10, 87.3% maximum fold changes >100, and 70.5% maximum fold changes >1000. FDA-approved drugs and experimental agents displayed similar variation. The variability remained very high after removal of outliers and among experiments performed in the same month. Hence, our analysis shows that high variability is an intrinsic feature of experimentation in biological systems, even among highly standardised experiments in a world-leading research environment. Thus, a narrow focus on experiment standardisation does not ensure a high level of replicability on its own.
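The headline statistic, the fold change between the lowest and highest GI50 recorded for one compound/cell line combination, is simple arithmetic; a minimal sketch with invented replicate values (not actual CellMiner data):

```python
def max_fold_change(gi50_values):
    """Largest fold change between the lowest and highest GI50 recorded
    for a single compound/cell line combination."""
    return max(gi50_values) / min(gi50_values)

# Hypothetical replicate GI50 values (molar) for one combination;
# the spread below is invented, not taken from CellMiner.
replicates = [1e-7, 4e-7, 2.5e-6, 1.3e-5]
print(round(max_fold_change(replicates)))  # 130
```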

2
Metabolic FRET sensors in intact organs: Applying spectral unmixing to acquire reliable signals

Gandara, L.; Durrieu, L.; Wappner, P.

2023-05-17 developmental biology 10.1101/2023.05.17.541214 medRxiv
Top 0.1%
6.9%

In multicellular organisms, metabolic coordination across multiple tissues and cell types is essential to satisfy regionalized energetic requirements and respond coherently to changing environmental conditions. However, most metabolic assays require the destruction of the biological sample, with a concomitant loss of spatial information. Fluorescent metabolic sensors and probes are among the most user-friendly techniques for collecting metabolic information with spatial resolution. In previous work, we adapted genetically encoded FRET-based metabolic sensors, originally developed in single-cell systems, to an animal model, Drosophila melanogaster. These sensors provide semi-quantitative data on the stationary concentrations of key metabolites of bioenergetic metabolism: lactate, pyruvate, and 2-oxoglutarate. The use of these sensors in intact organs required the development of an image processing method that minimizes the contribution of spatially complex autofluorescence patterns that would otherwise obscure the FRET signals. In this article, we describe the fundamentals of intramolecular hetero-FRET, the technology on which these sensors are based. Finally, using data from the lactate sensor expressed in the larval brain as a case study, we show step by step how to process the fluorescence signal to obtain reliable FRET values.
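The unmixing step described here amounts to solving a small linear system per pixel: the multi-channel signal is modelled as a weighted sum of reference spectra, and the weights are recovered by least squares. A toy sketch with made-up spectra (the real sensor and autofluorescence spectra would come from control acquisitions, and the authors' exact algorithm may differ):

```python
import numpy as np

# Hypothetical reference emission spectra sampled in 4 detection bands:
# column 0 = the FRET sensor, column 1 = tissue autofluorescence.
S = np.array([[0.6, 0.2],
              [0.3, 0.3],
              [0.1, 0.3],
              [0.0, 0.2]])

# Simulated pixel: 3 parts sensor signal plus 5 parts autofluorescence.
pixel = S @ np.array([3.0, 5.0])

# Linear spectral unmixing: least-squares solve for the two weights.
weights, *_ = np.linalg.lstsq(S, pixel, rcond=None)
print(weights)  # recovers ~[3, 5]
```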

3
Labelling Human Kinematics Data Using Classification Models

Shi, Y.; Chadderwala, N.; Ratan, U.

2022-02-24 rehabilitation medicine and physical therapy 10.1101/2022.02.18.22271206 medRxiv
Top 0.1%
5.1%

The goal of this study is to develop a classification model that can accurately and efficiently label human kinematics data. Kinematics data provide information about the movement of individuals by placing sensors on the human body and tracking their velocity, acceleration and position in three dimensions. These data points are available in C3D format, which contains numerical data transformed from the 3D data captured by the sensors. The data points can be used to analyse movements of injured patients or patients with physical disorders. To get an accurate view of the movements, the datasets generated by the sensors need to be properly labelled. Due to inconsistencies in the data capture process, there are instances where markers have missing data or missing labels. Missing labels are a hindrance in motion analysis, as they introduce noise and produce incomplete data points for sensor positioning in three-dimensional space. Labelling the data manually adds substantial effort to the analysis process. In this paper, we describe approaches to pre-process the kinematics data from its raw format and label the data points with missing markers using classification models.
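One simple baseline for the relabelling task is nearest-centroid classification on 3D marker position; the marker names and coordinates below are invented for illustration, not taken from the paper's dataset or its actual models:

```python
import numpy as np

# Hypothetical per-marker mean positions learned from labelled trials.
centroids = {
    "LKNE": np.array([0.10, 0.40, 0.50]),
    "RKNE": np.array([-0.10, 0.40, 0.50]),
    "LANK": np.array([0.10, 0.40, 0.10]),
}

def label_marker(position):
    """Assign an unlabelled marker to the closest known marker centroid."""
    return min(centroids, key=lambda name: np.linalg.norm(centroids[name] - position))

print(label_marker(np.array([0.12, 0.38, 0.48])))  # LKNE
```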

4
morse: an R-package in support of Environmental Risk Assessment

Charles, S.; Baudrot, V.

2021-06-02 ecology 10.1101/2021.04.07.438826 medRxiv
Top 0.1%
4.9%

Package morse is devoted to the analysis of experimental data collected from standard toxicity tests. It provides ready-to-use functions to visualize a data set and to estimate several toxicity indices to be further used in support of environmental risk assessment in full compliance with regulatory requirements. Such toxicity indices are indeed classically requested by standardized regulatory guidelines on which national agencies base their evaluation of applications for marketing authorisation of chemical active substances. Package morse can be used to get estimates of LCx (x% Lethal Concentration) or ECx (x% Effective Concentration) by fitting standard exposure-response models to toxicity test data. Risk indicator estimates as well as model parameters are provided along with the quantification of their uncertainty. Package morse can also be used to get estimates of the NEC (No Effect Concentration) by fitting a Toxicokinetic-Toxicodynamic (TKTD) model (namely GUTS models, that is, General Unified Threshold models of Survival). GUTS models also allow estimation of LC(x,t) (for any x and t) and LP(x,t), the latter being defined by EFSA as the x% multiplication factor leading to an additional reduction of x% in survival at the end of the exposure profile. Above all, GUTS models can be used on data collected under time-variable exposure profiles. This paper illustrates a typical use of morse with survival data collected over time and at different increasing exposure concentrations, analysed with the reduced version of GUTS models based on the stochastic death hypothesis (namely, the GUTS-RED-SD model). This example can be followed step by step to analyse any new data set, as long as the data set format is respected.
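morse itself is an R package; as a language-neutral illustration of the LCx idea, the two-parameter log-logistic exposure-response model that underlies such estimates can be sketched as follows (parameter values are hypothetical; morse estimates them, with uncertainty, from the data):

```python
def loglogistic_survival(conc, lc50, slope):
    """Two-parameter log-logistic exposure-response model: probability
    of survival at a given exposure concentration."""
    return 1.0 / (1.0 + (conc / lc50) ** slope)

def lcx(x, lc50, slope):
    """Concentration producing x% mortality, obtained by inverting the model."""
    p = x / 100.0
    return lc50 * (p / (1.0 - p)) ** (1.0 / slope)

# Hypothetical fitted parameters: lc50 = 2.0, slope = 3.0.
print(loglogistic_survival(2.0, lc50=2.0, slope=3.0))  # 0.5 at the LC50
print(lcx(50.0, lc50=2.0, slope=3.0))  # 2.0, by definition
```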

5
No observed effect concentration (NOEC) and minimal effective dose (MED): estimation of non-experimental doses

Hothorn, L. A.

2023-08-24 pharmacology and toxicology 10.1101/2023.08.23.554562 medRxiv
Top 0.1%
4.8%

In in-vitro or in-vivo bioassays, the no observed effect concentration (NOEC) is often determined. This simple procedure has several disadvantages, including the limitation of being able to estimate only experimental doses. Linear interpolation between adjacent doses overcomes this drawback while maintaining the level of a familywise error rate (FWER) using multiple contrast tests.
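The interpolation idea can be sketched directly: find the adjacent tested doses that bracket the target effect level and interpolate between them. The dose-response values below are invented, and the FWER-controlling multiple contrast tests from the paper are omitted:

```python
def interpolated_dose(doses, effects, threshold):
    """Linearly interpolate the dose at which the effect first crosses a
    threshold, rather than reporting only one of the tested doses.
    (The paper's full procedure additionally maintains the familywise
    error rate with multiple contrast tests; that part is not shown.)"""
    pairs = list(zip(doses, effects))
    for (d0, e0), (d1, e1) in zip(pairs, pairs[1:]):
        if e0 < threshold <= e1:
            return d0 + (threshold - e0) / (e1 - e0) * (d1 - d0)
    return None  # threshold never crossed within the tested range

# Invented dose-response: the 10% effect level falls between doses 10 and 30.
print(interpolated_dose([0, 10, 30, 100], [0.0, 5.0, 25.0, 60.0], 10.0))  # 15.0
```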

6
A microvolume method for measuring catalase activity

Rocha, B. C.; Rosa, M. T.; Rocha, J. B. T.; Loreto, E. L. S.

2025-11-09 biochemistry 10.1101/2025.11.07.687299 medRxiv
Top 0.1%
4.5%

We have developed and validated an innovative protocol for analyzing catalase activity in microvolumes using the NanoDrop spectrophotometer. This method offers a solution to the challenge of working with limited biological samples and provides an efficient alternative to conventional protocols that require larger sample volumes. Unlike typical microplate assays that aim to increase throughput or reduce costs, our protocol was developed specifically for scenarios where biological material is scarce, such as studies with small organisms like Panagrellus redivivus and Caenorhabditis elegans, as well as with certain tissues of Drosophila and other small organisms. A key advantage of the method described here is the ability to accurately measure catalase activity with as little as 2 µL of sample, making it ideal for studies where sample availability is extremely limited. The results show that the protocol effectively assesses catalase efficiency and reflects the physiological and metabolic properties of the tissues studied. Inhibitors and denaturants were used to ensure specificity of catalase measurements, and the method was optimized for minimal reagent consumption. This approach greatly expands the research possibilities on enzymatic activity in reduced biological models, especially in contexts where small samples are critical, such as limited tissue collections or small organisms.
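A common way to express catalase activity from an absorbance assay is the first-order rate constant of H2O2 decomposition at 240 nm; the sketch below assumes that convention, which may differ from the authors' exact calculation, and uses invented readings:

```python
import math

def catalase_rate_constant(a0, at, seconds):
    """First-order rate constant (s^-1) for H2O2 decomposition, computed
    from initial and final absorbance at 240 nm. Expressing activity this
    way is a common convention, not necessarily the authors' formula."""
    return math.log(a0 / at) / seconds

# Hypothetical readings: absorbance halves in 60 s.
k = catalase_rate_constant(0.50, 0.25, 60.0)
print(round(k, 4))  # ln(2)/60, about 0.0116
```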

7
Protocol for assessing DNA damage levels in vivo in rodent testicular germ cells in the Alkaline Comet Assay

Olsen, A.-K.; Ma, X.; Zheng, C.; Dahl, H.; Dirven, Y.; Boisen, A. M.; Sharma, A.; Eide, D. M.; Brunborg, G.

2025-02-03 pharmacology and toxicology 10.1101/2024.12.22.624648 medRxiv
Top 0.1%
4.3%

Few protocols are available for retrieving male germ cell-specific information on DNA damage dynamics or assessing the potential harmful effects of environmental contaminants, drugs, or lifestyle factors related to genotoxicity. Here, we present a protocol for evaluating testicular germ cell genotoxicity using a modified version of the alkaline comet assay in rodent testicular germ cells. The protocol includes experimental design, preparation of testicular cell suspensions, comet analysis, germ cell-specific scoring, and data curation methods to collect information on testicular male germ cells, specifically spermatids and primary spermatocytes. For complete details on the use of this protocol please refer to Olsen et al., in preparation.

Highlights:
- DNA damage levels can be specifically measured in testicular germ cells using the developed revised version of the alkaline comet assay
- 1C spermatids as well as 4C primary spermatocytes can be assessed
- Both manual- and modeling-based approaches were developed that facilitate user-friendly protocols to select 1C spermatid comets
- This protocol expands the limited methodologies available to study germ cell DNA damage dynamics

8
FishFeats: streamlined quantification of multimodal labeling at the single-cell level in 3D tissues

Letort, G.; Foley, T.; Mignerey, I.; Bally-Cuif, L.; Dray, N.

2025-09-04 developmental biology 10.1101/2025.09.02.673708 medRxiv
Top 0.1%
4.2%

Summary: Characterizing the distribution of biological marker expression at the single cell level in whole tissues requires diverse image analysis steps, such as segmentation of cells and nuclei, detection of RNA transcripts (or other staining), or their integration (e.g., assigning nuclei and RNA dots to their corresponding cell). Several software programs or algorithms have been developed for each step independently, but integrating them into a comprehensive pipeline for the quantification of individual cells from 3D imaging samples remains a significant challenge. We developed FishFeats, an open-source and flexible Napari (Sofroniew et al. 2025) plugin, to perform all of these steps together within the same framework, taking advantage of available and efficient software applications. The primary aim of our pipeline is to provide a user-friendly tool for users who do not have a computational background. FishFeats streamlines extracting quantitative information from multimodal 3D fluorescent microscopy images (smFISH expression in individual cells, immunohistochemical staining, cell morphologies, cell classification) into a unified "cell-by-cell" table for downstream analysis, without requiring any coding. Our second focus is to support and ease manual correction of each step, as measurement accuracy can be very sensitive to small errors in the automatic process. Availability and implementation: FishFeats is open source under the BSD-3 license, freely available on GitHub: https://github.com/gletort/FishFeats. FishFeats is developed in Python, as a Napari plugin for the user interface. Documentation is available on the GitHub pages: https://gletort.github.io/FishFeats/.

9
Data cleaning for image-based profiling enhancement

Rohban, M. H.; Bigverdi, M.; Rezvani, A.

2021-09-10 cell biology 10.1101/2021.09.09.459624 medRxiv
Top 0.1%
4.2%

With the advent of high-throughput assays, a large number of biological experiments can be carried out. Image-based assays are among the most accessible and inexpensive technologies for this purpose. Indeed, these assays have proved to be effective in characterizing unknown functions of genes and small molecules. Image analysis pipelines have a pivotal role in translating raw images that are captured in such assays into useful and compact representations, also known as measurements. CellProfiler is a popular and commonly used tool for this purpose, providing readily available modules for cell/nucleus segmentation and for making various measurements, or features, for each cell/nucleus. Single-cell features are then aggregated for each treatment replicate to form treatment "profiles." However, there may be several sources of error in the CellProfiler quantification pipeline that affect the downstream analysis performed on the profiles. In this work, we examined various preprocessing approaches to improve the profiles. We consider identification of drug mechanisms of action as the downstream task to evaluate such preprocessing approaches. Our enhancement steps mainly consist of data cleaning, cell-level outlier detection, toxic drug detection, and regressing out the cell area from all other features, as many of them are widely affected by the cell area. We also examined unsupervised and weakly-supervised deep learning based methods to reduce the feature dimensionality, and finally suggest possible avenues for future research.
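Regressing the cell area out of another feature, one of the enhancement steps named above, keeps only the residual of a linear fit, which is uncorrelated with area by construction. A minimal sketch on simulated data:

```python
import numpy as np

def regress_out(feature, area):
    """Return the residual of a linear fit of feature on cell area,
    i.e. the feature with its area-driven component removed."""
    slope, intercept = np.polyfit(area, feature, 1)
    return feature - (slope * area + intercept)

# Simulated single-cell data: a feature largely driven by cell area.
rng = np.random.default_rng(0)
area = rng.uniform(100.0, 500.0, 200)
feature = 0.02 * area + rng.normal(0.0, 0.5, 200)

adjusted = regress_out(feature, area)
# The residual is (numerically) uncorrelated with area.
print(abs(np.corrcoef(adjusted, area)[0, 1]))
```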

10
Deciding between one-step and two-step irreversible inhibition mechanisms on the basis of "kobs" data: A statistical approach

Kuzmic, P.

2020-06-09 biochemistry 10.1101/2020.06.08.140160 medRxiv
Top 0.1%
4.1%

This paper describes an objective statistical approach that can be used to decide between two alternate kinetic mechanisms of covalent enzyme inhibition from kinetic experiments based on the standard "kobs" method. The two alternatives are either a two-step kinetic mechanism, which involves a reversibly formed noncovalent intermediate, or a one-step kinetic mechanism, proceeding in a single bimolecular step. Recently published experimental data [Hopper et al. (2020) J. Pharm. Exp. Therap. 372, 331-338] on the irreversible inhibition of Bruton tyrosine kinase (BTK) and tyrosine kinase expressed in hepatocellular carcinoma (TEC) by ibrutinib (PCI-32765) and acalabrutinib are used as an illustrative example. The results show that the kinetic mechanism of inhibition was misdiagnosed in the original publication for at least one of the four enzyme/inhibitor combinations. In particular, based on the available kobs data, ibrutinib behaves effectively as a one-step inhibitor of the TEC enzyme, which means that it is not possible to reliably determine either the inhibition constant Ki or the inactivation rate constant kinact, but only the covalent efficiency constant keff = kinact/Ki. Thus, the published values of Ki and kinact for this system are not statistically valid.
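The crux of the argument is that when Ki far exceeds every tested inhibitor concentration, the hyperbolic two-step kobs curve is practically indistinguishable from the one-step straight line, so only keff = kinact/Ki is identifiable. A numerical sketch with hypothetical rate constants (not the published BTK/TEC values):

```python
import numpy as np

def kobs_two_step(i, kinact, ki):
    """Hyperbolic kobs for the two-step mechanism (noncovalent intermediate)."""
    return kinact * i / (ki + i)

def kobs_one_step(i, keff):
    """Linear kobs for the one-step bimolecular mechanism."""
    return keff * i

# Hypothetical constants with Ki far above every tested [I]:
conc = np.array([0.1, 0.2, 0.5, 1.0])             # inhibitor concentrations
two = kobs_two_step(conc, kinact=100.0, ki=1000.0)
one = kobs_one_step(conc, keff=100.0 / 1000.0)     # keff = kinact / Ki
print(np.max(np.abs(two - one) / one))             # relative difference below 0.1%
```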

11
Various Optimization Strategies for the Isolation of Mitochondria from Sprague-Dawley Rat Liver Tissue

Roth, C.; Vadovsky, A.; Xia, T.; Bazil, J. N.

2025-08-01 biochemistry 10.1101/2025.07.30.667653 medRxiv
Top 0.1%
4.1%

Mitochondria are key organelles that establish a very large free energy drop for the ATP hydrolysis reaction in the cytoplasm of cells, which provides the energy required for cellular maintenance of homeostasis. They are commonly isolated from living tissue because isolation makes them easier to interrogate at the biochemical level. Thus, isolated mitochondria of sufficient quality are desired to improve translational impact. To gain greater insight into the isolation process and optimize our current isolation protocol for quality, we implemented various fine-tuning modifications to our standard homogenization, centrifugation, and purification steps with isolated rat hepatocyte mitochondria. The modifications we tested were: i) different homogenization speeds (10,000, 14,000, and 18,000 rpm) with varying time intervals (10, 20, and 30 sec), ii) addition of an extra purification spin, and iii) use of density gradients to further purify the isolated mitochondria from non-mitochondrial contaminants. Mitochondrial quality was approximated using the well-established respiratory control ratio (RCR). The data reveal that our original protocol yields isolated mitochondria of acceptable quality, and the optimization attempts produced similar or worse mitochondrial isolates. In addition, an extra purification spin decreased RCR values and therefore is not recommended. We note that the use of density gradients did not improve the RCR, but it did remove presumed peroxisomal contamination. While this protocol can be further enhanced using additional metrics, the data indicate that our current isolation protocol is sufficiently effective, since additional modifications did not yield major improvements.
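The quality metric used throughout is the respiratory control ratio; as a reminder of the arithmetic (the flux values below are invented, not from the paper):

```python
def respiratory_control_ratio(state3, state4):
    """RCR = state 3 (ADP-stimulated) / state 4 (resting) respiration;
    higher values indicate better-coupled, higher-quality mitochondria."""
    return state3 / state4

# Hypothetical oxygen flux readings (nmol O2/min/mg protein).
print(respiratory_control_ratio(120.0, 20.0))  # 6.0
```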

12
An optimised method for recovery and quantification of laboratory generated SARS-CoV-2 aerosols by plaque assay.

Byrne, R. L.; Gould, S.; Edwards, T.; Wooding, D.; Atkinson, B.; Moore, G.; Collings, K.; Boisdon, C.; Maher, S.; Biagini, G.; Adams, E. R.; Fletcher, T.; Pennington, S. H.

2022-11-01 molecular biology 10.1101/2022.10.31.514483 medRxiv
Top 0.1%
3.8%

We present an optimised method for the recovery of laboratory generated SARS-CoV-2 virus by plaque assay. This method allows easy incorporation into existing standard operating procedures of biological containment level 3 (BCL3) laboratories.

13
FetalBoneData: an R data package collating raw measurements of fetal bones across different gestational stages

O'Mahoney, T. G.; Vakil Kumar, J.

2025-09-01 developmental biology 10.1101/2025.08.28.672847 medRxiv
Top 0.1%
3.8%

Objectives: Raw data of fetal measurements is often difficult to track down in the literature, and researchers are often limited to comparing their own original data to summary tables in synthetic volumes, or investing considerable time and resources into collecting this data themselves. Here, we present the R data package FetalBoneData, which we hope will improve access to such datasets. Methods: Data was sourced from the literature (primarily Fazekas and Kosa (1978), which is long out of print) and work by the lead author. This was collated into a series of .csv files, before being put together into an R data package. Results: We apply the data in this package to compare the measurements of the humerus in a 19th Century fetal collection (Liverpool fetal collection, this paper) against those reported by Fazekas and Kosa (1978) as a case study of the utility of the package. Discussion: Publishing such data in an open-source format, easily accessible through a popular statistical package, can significantly improve the availability of this type of data. It is hoped that data will be continuously added to the package, further improving its utilization.

14
Hough transform implementation to evaluate the morphological variability of the moon jellyfish (Aurelia sp.)

Gadreaud, J.; Lacaux, C.; Desolneux, A.; Martin-Garin, B.; Thiéry, A.

2020-03-11 developmental biology 10.1101/2020.03.11.986984 medRxiv
Top 0.1%
3.8%

Variations in animal body plan morphology and morphometry can be used as prognostic tools of habitat quality. The potential of the moon jellyfish (Aurelia spp.) as a new model organism has been poorly tested. However, as an organism with tetramerous symmetry, it exhibits some variation in its radial symmetry number. A pertinent list of morphological (number of gonads) and morphometric (e.g. ratio of gonad area to umbrella area) characteristics has been established to describe the morphology of 19 specimens through image analysis. The method uses, for the first time, the Hough transform to approximate the gonads and the umbrella by ellipses and automatically extracts the morphometric data. A statistical comparison was made between the morphometric characteristics of tetramerous jellyfish and of jellyfish with 5 gonads; it is only provided as a first step for testing biological hypotheses, since the small size of the data set means its conclusions should be treated with caution. It suggests that two parameters are discriminant: the distance between the center of the gonads and the center of the umbrella, and the individual variability of gonad eccentricity, both higher in jellyfish with 5 gonads. Additionally, the relative size of the gonads does not seem to differ between tetramerous and non-tetramerous individuals. Combined with ecotoxicological bioassays to better understand the causes of this developmental alteration, this optimizable method can become a powerful tool in the symmetry description of an in situ population.

15
Advancing the Application of pXRF for Biological Samples

Brandis, K.; Francis, R.

2024-01-17 biochemistry 10.1101/2024.01.16.575873 medRxiv
Top 0.1%
3.7%

Point 1: Portable x-ray fluorescence (pXRF) technology provides significant opportunities for rapid, non-destructive data collection in a range of fields of study. However, there are sources of variation and sample assumptions that may influence the data obtained, particularly in biological samples. Point 2: We used representative species for four taxa (fish, mammals, birds, reptiles) to test the precision of replicate scans, and the impact of sample thickness, sample state, scan location and scan time on data obtained from a pXRF. Point 3: We detected significant differences in concentration data due to sample state, scanning time and scanning location for all taxa. Infinite-thickness assumptions were met for fish, reptile and mammal representatives at all body locations when samples were thawed, but not dried. Infinite thickness was not met for feathers. Scan time results showed that in most cases the 40, 60 and 80 second beam times were equivalent. Concentration data across replicate scans were highly correlated. Point 4: The opportunities for the use of pXRF in biological studies are wide-ranging. These findings highlight the considerations required when scanning biological samples to ensure the required data are suitably collected, while maintaining minimal radiation exposure to live animals.

16
A platform for lab management, note-keeping and automation

Fleiss, A.; Mishin, A. S.; Sarkisyan, K. S.

2024-07-11 molecular biology Community evaluation 10.1101/2024.07.08.602487 medRxiv
Top 0.1%
3.7%

We report a lab management concept and its no-code implementation based on general-purpose database services, such as Airtable. The solution we describe allows for integrated management of samples, lab procedures, experimental notes and data within a single browser-based application, and supports custom automations. We believe that this system can benefit a wide scientific audience by offering communication-less retrieval of information, collaborative editing, unified sample labelling and data keeping style. A template database is available at airtable.com/universe/expPcKlB7VCHE6wVK/lab-management.

17
A Synthetic Single-Stranded Dual-Template Oligonucleotide as a Reference Standard for Monochrome Multiplex qPCR Measurements of Average Telomere Length

Cawthon, R. M.

2020-06-25 molecular biology 10.1101/2020.06.24.169797 medRxiv
Top 0.1%
3.6%

Quantitative PCR is frequently used to measure average telomere length (TL) relative to the TL of a reference DNA sample of the investigator's choosing. This makes comparisons of TLs across studies and laboratories difficult. Here we demonstrate that a single synthetic single-stranded dual-template oligonucleotide (DTO) containing both a telomere repeat sequence (T) and a segment of the human beta-globin (HBB) single copy gene (S) can be used as a universal reference standard for monochrome multiplex quantitative PCR (MMqPCR) measurements of average TL using SYBR Green I as the only fluorescent reporter dye. A set of twelve concentrations of the DTO is prepared by serial 3-fold dilutions, to a lowest concentration of ~20 copies per µL. The 5 highest concentrations are used for the T standard curve, and the 5 lowest concentrations are used for the S standard curve. For each reaction, 5 µL containing approximately 3 ng of genomic DNA (or one of the DTO dilutions) is mixed with 5 µL of a 2x MasterMix containing the primers for T and S amplification, and MMqPCR is performed. The design of the primers and thermal cycling profile allows all T amplification signals to be collected before exponential amplification of the S signal begins. Exponential amplification from S is then carried out in a temperature range that keeps the telomere product fully melted and therefore unable to influence the S amplification signal. The T value for each DNA sample is the standard-curve DTO dilution that contains the same number of copies of the telomere sequence as the experimental sample, and the S value is the DTO dilution that contains the same number of copies of the single copy gene sequence as the experimental sample. Dividing the first dilution by the second dilution yields an absolute T/S ratio, since it is expressed relative to the fixed 1:1 T/S ratio that is built into the DTO by design.
Absolute T/S ratios for average TL in 48 human DNA samples determined by this method correlated strongly with mean Terminal Restriction Fragment (mTRF) lengths for the same DNA samples determined by the Southern Blot method (R-squared = 0.801). This DTO and the accompanying protocol may facilitate the standardization of average telomere length measurements and analyses across laboratories.
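The final calculation reduces to dividing the copy numbers of the two matching DTO dilutions, which is absolute rather than relative because the DTO carries T and S at a fixed 1:1 ratio. A sketch with hypothetical dilution copy numbers:

```python
def absolute_ts_ratio(t_copies, s_copies):
    """Absolute T/S ratio: copy number of the DTO dilution matching the
    sample's telomere (T) signal divided by the copy number of the
    dilution matching its single-copy-gene (S) signal."""
    return t_copies / s_copies

# Hypothetical matching dilutions (copies per reaction), not from the paper.
print(absolute_ts_ratio(1.458e6, 5.4e4))  # 27.0
```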

18
Redoxyme: a lightweight graphical user interface for standardized calculation of antioxidant enzyme activities

Soares, G. C. d. F.; Varella, A. L. N.; Facundo, H. T.

2026-02-05 biochemistry 10.64898/2026.02.05.703993 medRxiv
Top 0.1%
3.6%

Oxidative stress results from excessive accumulation of reactive oxygen species (ROS) and plays a central role in numerous physiological and pathological processes. Accurate quantification of antioxidant enzyme activities is therefore essential in redox biology research. However, data analysis for commonly used assays, such as superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx), is frequently performed using spreadsheets or manual calculations, which are time-consuming and prone to error. Here, we present Redoxyme, a free, open-source, Python-based graphical user interface designed to standardize and automate the calculation of antioxidant enzyme activities. The software integrates protein normalization, enzyme-specific calculation routines, data visualization, and Excel export within an intuitive interface that does not require programming expertise. Redoxyme was validated using experimental data obtained from animal tissues (rats and mice), demonstrating excellent agreement with manual calculations and established analytical methods. Redoxyme provides a practical solution for improving reproducibility and efficiency in antioxidant enzyme activity analysis. The software is currently distributed as a standalone executable for Windows (locally installed), and as an interactive web-based calculator implemented in Streamlit, enabling direct use without local installation. The source code and version-controlled development history are openly accessible via GitHub, promoting transparency, reproducibility, and community-driven improvements; the software can, in principle, be adapted for other operating systems.
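As an example of the per-well arithmetic such a tool automates, SOD activity is conventionally reported as percent inhibition of the blank reaction rate; whether Redoxyme uses exactly this formulation is an assumption, and the rates below are invented:

```python
def sod_percent_inhibition(rate_blank, rate_sample):
    """SOD activity expressed as percent inhibition of the blank
    (uninhibited) reaction rate; a conventional formulation, assumed
    here rather than taken from the Redoxyme source."""
    return 100.0 * (rate_blank - rate_sample) / rate_blank

# Hypothetical absorbance slopes (delta A/min) for blank and sample wells.
print(round(sod_percent_inhibition(0.040, 0.022), 1))  # 45.0
```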

19
The SENSOR System: Using Standardized Data Entry and Dashboards for Review of Scientific Studies on the Utility of Blood-Based Protein Biomarkers for Patients with Mild Brain Injury

Aggerwal, S.; Safi, T.; Beliveau, P.; Gupta, G.

2023-01-12 rehabilitation medicine and physical therapy 10.1101/2023.01.10.23284296 medRxiv
Top 0.1%
3.6%

Background: There is no objective way of diagnosing or prognosticating acute traumatic brain injuries (TBIs). A systematic review conducted by Mondello et al. reviewed studies looking at blood based protein biomarkers in the context of acute mild traumatic brain injuries and correlation to results of computed tomography scanning. This paper provides a summary of this same literature using the SENSOR system. Methods: An existing review written by Mondello et al. was selected to apply the previously described SENSOR system (Kamal et al.) that uses a systematic process made up of a Google Form for data intake, Google Drive for article access, and Google Sheets for the creation of the dashboard. The dashboard consisted of a map, bubble graphs, multiple score charts, and a pivot table to facilitate the presentation of data. Results: A total of 29 entries were inputted by two team members. Sensitivities, specificities, positive predictive values (PPVs), negative predictive values (NPVs), demographics, cut-off levels, biomarker levels, and assay ranges were analyzed and presented in this study. S100B and GFAP biomarkers may provide good clinical utility, whereas UCH-L1, C-Tau, and NSE do not. Discussion: This study determined the feasibility and reliability of multiple biomarkers (S100B, UCH-L1, GFAP, C-tau, and NSE) in predicting traumatic brain lesions on CT scans, in mTBI patients, using the SENSOR system. Many potential limitations exist for the existing literature including controlling for known confounders for mild traumatic brain injuries. Conclusion: The SENSOR system is an adaptable, dynamic, and graphical display of scientific studies that has many benefits, which may still require further validation. Certain protein biomarkers may be helpful in deciding which patients with mTBIs require CT scans, but impact on prognosis is still not clear based on the available literature.
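The dashboard's core quantities come from a standard 2x2 table of test result versus CT finding; a minimal sketch with an invented table (not the review's actual counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    biomarker result versus CT finding."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Invented 2x2 counts for one biomarker/cut-off combination.
m = diagnostic_metrics(tp=45, fp=30, fn=5, tn=120)
print(m["sensitivity"], m["npv"])  # 0.9 0.96
```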

20
Interpreting Biomarker Test Results for Alzheimer Disease, Parkinson Disease and Other Neurodegenerative Diseases Without the Autopsy Gold Standard

Zhang, N.; Adler, C. H.; Atri, A.; Aslam, S.; Serrano, G. E.; Beach, T. G.; Chen, K.

2025-04-28 neurology 10.1101/2025.04.23.25326286 medRxiv
Top 0.1%
3.6%

There is an extensive literature on the difficulty of assessing new diagnostic tests when an accurate gold standard does not exist, is imperfect, or is not available. Relatively few reports, however, address this problem when it confronts researchers reporting tests of potential new biomarkers for Alzheimer disease (AD), Parkinson disease (PD) and other neurodegenerative diseases. This is despite the reality that the vast majority of published studies employ the neurologist's clinical diagnoses as the gold standard, despite their well-known inherent inaccuracies relative to the true gold standard, the autopsy neuropathology diagnosis. More recently, biomarkers that have, appropriately, been evaluated against the autopsy gold standard have then themselves been used as a surrogate gold standard to evaluate other biomarkers, despite their less-than-perfect accuracy against autopsy. The shortcomings of these approaches to neurodegenerative disease biomarker validation are rarely discussed. It is clear from the prior literature that testing the accuracy of a new AD or PD diagnostic test against the clinical diagnosis can in fact lead to both underestimates and overestimates of its true (against autopsy) accuracy. Despite these problems, the clinical diagnoses of AD and PD are still routinely used to assess the accuracy of new biomarkers. A related issue is that when a new biomarker is evaluated against a test type previously validated against autopsy (e.g. amyloid PET or tau PET), in effect serving as a surrogate gold standard, it is again clear that this new biomarker might itself be more accurate, as accurate, or less accurate than the surrogate gold standard. We here present a method that makes it possible, if there are published data on the accuracy of the clinical diagnosis, or of a surrogate gold standard, relative to autopsy, to at least estimate a range of possible accuracies of a new biomarker using those imperfect gold standards.
Our procedure was developed using basic theoretical modeling of conditional probabilities.
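Under the usual conditional-independence assumption, the sensitivity a new test appears to have against an imperfect reference can be written in closed form, and even a perfect test is penalized. This sketch uses the standard formula, not necessarily the authors' exact procedure, with invented accuracy values:

```python
def apparent_sensitivity(prev, se_new, sp_new, se_ref, sp_ref):
    """Sensitivity a new test appears to have when scored against an
    imperfect reference, assuming the two tests' errors are conditionally
    independent given true (autopsy-confirmed) disease status."""
    # P(new+, ref+) summed over diseased and non-diseased subjects:
    pos_both = prev * se_new * se_ref + (1 - prev) * (1 - sp_new) * (1 - sp_ref)
    # P(ref+):
    ref_pos = prev * se_ref + (1 - prev) * (1 - sp_ref)
    return pos_both / ref_pos

# A perfect new test (sensitivity = specificity = 1) scored against a
# 90%/90% surrogate gold standard at 50% prevalence appears only 90% sensitive.
print(round(apparent_sensitivity(0.5, 1.0, 1.0, 0.9, 0.9), 3))  # 0.9
```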