Nature Protocols
○ Springer Science and Business Media LLC
Preprints posted in the last 90 days, ranked by how well they match the content profile of Nature Protocols, based on 30 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
George, B.; Kirkpatrick, B. Q.; Zhang, Q.
Nuclei isolation from myelin-rich adult mouse brain regions remains challenging for single-nucleus RNA sequencing because myelin and debris can reduce nuclei quality. We describe an optimized protocol for mouse hippocampi and cerebella using tube-and-pestle homogenization and low-volume sucrose-gradient pelleting with a standard benchtop centrifuge, with optional magnetic enrichment of nuclei to reduce debris/non-nuclear carryover. Under the tested conditions, the workflow produces intact, debris-reduced nuclei and supports downstream 10x Genomics Flex and PARSE WT library preparation.
Highlights
- Benchtop sucrose-gradient pelleting enables rapid nuclei purification from myelin-rich adult mouse brain
- Scales across tissue inputs (e.g., hippocampus ~15-20 mg; cerebellum ~50-70 mg) without ultracentrifugation or 15 mL gradients
- Magnetic enrichment as the recommended final cleanup step further reduces myelin/debris carryover and is compatible with 10x Flex and PARSE WT workflows
Arnaiz del Pozo, C.; Sanchis-Lopez, C.; Huerta-Cepas, J.
Summary: The combination of target capture metagenomics and long-read sequencing represents a powerful approach for the characterisation of rare microbial taxa and their functional genes. However, standard Nanopore library preparations are incompatible with established capture protocols. A possible workaround is the preparation of Illumina libraries prior to ONT sequencing. Currently, this hybrid approach is hindered by a lack of specialised demultiplexing software capable of handling residual adapter fragments, Nanopore's higher error rates, and positional variability. Here, we present deluxpore: a Nextflow pipeline that demultiplexes Nanopore reads from Illumina dual-indexed libraries (NEBNext and Nextera) using BLAST alignment and Levenshtein distance matching. Extensive benchmarking across 18 replicates validates the viability and precision of this hybrid indexing approach and demonstrates that accurate demultiplexing requires minimum Q20 data quality and strategic index selection. Unique index-to-sample designs achieved 91.7% sample recovery at Q20 versus 46.1% for combinatorial approaches. We also identified high-crosstalk index pairs within NEBNext Primer Set A and provide an optimized 8-sample configuration achieving ~95% accuracy at Q20. deluxpore enables reliable, automated demultiplexing for hybrid capture long-read sequencing workflows.
Availability and implementation: deluxpore is implemented in Nextflow, Python, and Bash under the GNU GPL v3.0. Source code, documentation, and benchmarking workflows are available at https://github.com/compgenomicslab/deluxpore and https://github.com/compgenomicslab/deluxpore-benchmarking.
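The Levenshtein-distance matching step described above can be illustrated with a minimal sketch. The function names, the `max_dist` threshold, and the ambiguity rejection are illustrative assumptions, not the deluxpore implementation:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic Wagner-Fischer dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def assign_index(observed: str, known_indexes: dict, max_dist: int = 2):
    """Assign an observed index read to the closest known index,
    rejecting matches that are too distant or ambiguous (hypothetical
    thresholds; the real pipeline also uses BLAST alignment)."""
    scored = sorted((levenshtein(observed, seq), name)
                    for name, seq in known_indexes.items())
    best_dist, best_name = scored[0]
    if best_dist > max_dist:
        return None  # too many errors to trust the assignment
    if len(scored) > 1 and scored[1][0] == best_dist:
        return None  # tie between two indexes: ambiguous
    return best_name
```

Edit distance (rather than exact matching) is what tolerates Nanopore's insertion/deletion errors; the tie check is one way to model the index crosstalk the benchmark quantifies.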
Gupta, A.; Struba, A. Z.; Madhavan, S.; Strayer, E.; Beaudoin, J.-D.
The translation of mRNA into protein is tightly regulated by both cellular trans-factors and cis-regulatory elements encoded within transcripts. Although transcript fate can be measured by transcript abundance or translation efficiency, separating the contribution of each individual cis-element within a single transcript is an ongoing challenge. Current massively parallel reporter assay (MPRA) approaches enable systematic interrogation of cis-regulatory elements that control transcript stability, but translation-focused MPRAs remain technically limited and often inaccessible. Here we present Nascent Peptide Translating Ribosome Affinity Purification (NaP-TRAP), a reporter-based approach that simultaneously measures translation and mRNA abundance. Unlike previous methods, NaP-TRAP captures translation directly through the immunoprecipitation of epitope-tagged nascent peptide chains, providing instantaneous, frame-specific readouts without specialized instrumentation. The method is highly scalable from single reporters to complex libraries, and adaptable across in vivo and in vitro systems. NaP-TRAP is versatile, allowing assessment of the cis-regulatory impact of elements distributed throughout the mRNA, from cap to tail. This protocol covers experimental design, reporter construction, sample processing, and computational analysis for both low- and high-throughput applications. Bench work can be completed in 4-5 days, with qPCR-based readouts requiring only basic Excel skills for data processing. Sequencing-based readouts require skills in command-line tools and Python scripting and add an additional 2-3 days. NaP-TRAP thus offers an accessible, robust, and quantitative platform to decode the regulatory logic of mRNA translation and stability in diverse biological contexts.
Basic Protocol 1: Design, assembly, and synthesis of NaP-TRAP reporter libraries.
Support Protocol 1: Design, assembly, and synthesis of NaP-TRAP individual reporters and spike-ins.
Basic Protocol 2: NaP-TRAP delivery by micro-injection in zebrafish embryos.
Alternate Protocol 1: NaP-TRAP delivery by transfection in cultured mammalian cells.
Basic Protocol 3: NaP-TRAP pulldown and RNA extraction.
Basic Protocol 4: Preparation of NaP-TRAP cDNA sequencing libraries.
Alternate Protocol 2: NaP-TRAP qPCR module for low-cost validation.
Basic Protocol 5: Computational analysis of NaP-TRAP MPRA data.
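The core readout of an assay that compares immunoprecipitated (nascent-peptide-associated) reads to input mRNA reads can be sketched generically. All names, the pseudocount, and the library-size normalization here are assumptions for illustration, not the NaP-TRAP computational pipeline itself:

```python
import math

def translation_scores(ip_counts, input_counts, pseudocount=1.0):
    """Per-reporter translation score: log2 ratio of immunoprecipitated
    reads to input mRNA reads, each normalized to total library size.
    A generic MPRA-style sketch under stated assumptions."""
    ip_total = sum(ip_counts.values())
    in_total = sum(input_counts.values())
    scores = {}
    for reporter in ip_counts:
        ip = (ip_counts[reporter] + pseudocount) / ip_total
        inp = (input_counts.get(reporter, 0) + pseudocount) / in_total
        scores[reporter] = math.log2(ip / inp)
    return scores
```

A reporter enriched in the pulldown relative to input scores positive (well translated); one depleted scores negative, which is how cis-element effects on translation separate from effects on abundance.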
Antony, F.; Bhattacharya, A.; Duong van Hoa, F.
Peptergents are a novel class of amphipathic peptides that enable detergent-free extraction and purification of membrane proteins (MPs). These designed peptides self-assemble around the hydrophobic transmembrane regions of proteins, forming stable, water-soluble assemblies that can be isolated directly from biological membranes. In doing so, Peptergents bypass the limitations imposed by traditional detergents, which often destabilize proteins and restrict downstream analyses. Since detergents are completely avoided, Peptergent-isolated MPs are directly amenable to structural and mass spectrometry (MS) analysis, thereby addressing their persistent underrepresentation in proteomic datasets and improving their accessibility for drug-screening strategies. Here, we describe a streamlined protocol for isolating MPs with the Peptergent PDET-1, followed by exchange into His-tagged Peptidiscs for Ni-NTA-based affinity purification. The method comprises membrane isolation, peptide preparation, protein extraction, clarification, and exchange of MPs from Peptergent to Peptidiscs. Application of this workflow yields enriched membrane proteomes compatible with downstream LC-MS/MS analysis, with improved recovery of hydrophobic and multi-pass membrane proteins.
Key features
- Direct extraction and solubilization of membrane proteins in Peptergents
- Exchange into His-tagged Peptidiscs enabling affinity purification of MPs
- 100% detergent-free workflow compatible with LC-MS/MS analysis
- Applicable to cultured cells and tissue-derived membrane fractions
In Brief: We describe a Peptergent-based workflow for isolating membrane proteins directly from membrane preparations. Proteins are extracted with the Peptergent peptide scaffold (PDET-1) and transferred into His-tagged Peptidiscs (HD-43). The water-soluble membrane proteins are enriched by Ni-NTA affinity purification and prepared for bottom-up mass spectrometry, yielding enriched membrane proteomes and dried peptide samples ready for LC-MS analysis.
Koderman, M.; Pilarski, J.; Bianco, E.; Gonzalez, D.; Robinson, M. D.; Macnair, W.
Motivation: The transition toward "atlas-scale" single cell research has resulted in datasets comprising millions of cells across hundreds of samples, creating significant challenges for data management, computational efficiency, and reproducibility. While numerous methods are available for individual steps in single cell data processing, the highly complex nature of the analysis makes it challenging to maintain a clear record of every tool and parameter used. This makes final results difficult to reproduce, highlighting the need for a unified workflow that integrates multiple steps into a cohesive framework.
Results: scprocess is a Snakemake pipeline designed to streamline and automate the complex steps involved in processing single cell RNA sequencing data. Specifically optimized for data generated using the 10x Genomics technology, it provides a comprehensive solution that transforms raw sequencing files into standardized outputs suitable for a variety of downstream tasks. The pipeline is built to support the analysis of datasets comprising multiple (e.g. 100+) samples via a simple CLI, allowing researchers to efficiently explore their datasets while ensuring reproducibility and scalability in their workflows.
Availability and implementation: scprocess can be installed via GitHub (https://github.com/marusakod/scprocess) under the MIT license. Documentation, including setup instructions and tutorials on example datasets, is available at https://marusakod.github.io/scprocess/.
Golas, S. M.; Gill, B.; Wardlow, K.; Baydush, A.; Linzbach, J.; Chory, E. J.
The expanding scope of laboratory automation increasingly demands systems that can be tailored to specific experimental constraints, including footprint, timing, cost, and control. While open-source software has improved protocol flexibility, liquid-handling hardware itself remains largely closed, limiting the ability of academic and startup laboratories to build instruments around biological requirements rather than vendor defaults. Here, we present a fully open-source, purpose-built liquid-handling robot assembled from commercially available components and developed entirely in a research setting. The platform integrates open hardware, electronics, and a Python-based control stack compatible with PyLabRobot, exposing low-level motion dynamics and liquid-handling behaviors directly to experiment code. We validate the system using a high-throughput turbidostat workflow that requires rapid, closed-loop measurement and actuation to maintain microbial cultures at defined density setpoints. The robot sustains stable steady-state growth across approximately 200 cultures with heterogeneous growth dynamics. A replica build completed by two lab members in approximately one week confirms that the platform can be reproduced from its bill of materials and assembly guide. Its compact footprint and use of off-the-shelf components make it suitable for rapid, parallel deployment in settings such as public health emergencies or by distributed laboratories. Together, these results demonstrate that industry-class liquid handlers can be custom-built for specific experimental goals, establishing a blueprint for open, purpose-driven hardware development across research and industrial automation contexts. 
Figure 1: Open Liquid Handler (OLH) Design Goals. Left: Design goals for a purpose-built platform for time-sensitive, closed-loop biological workflows, emphasizing high-accuracy dosing (low-variability liquid handling), rapid integrated measurement (plate deck and isolated workspace), customizable deck and peripheral options, compact footprint with high throughput, containment via an enclosed wet workspace for biosafety and sterility, and a replicable build using off-the-shelf OEM components with open design files. Right: Open Liquid Handler design and physical implementation, with aerial and front views highlighting the enclosed cabinet and the working envelope over a compact deck.
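The closed-loop turbidostat control the robot performs can be sketched as a simple proportional dilution rule. The function, the gain, and the cap are illustrative assumptions, not the platform's actual controller:

```python
def dilution_volume(od: float, setpoint: float, culture_ml: float,
                    gain: float = 1.0, max_frac: float = 0.3) -> float:
    """Volume of fresh medium to add (and culture to remove) so the
    diluted culture moves toward the OD setpoint.

    Replacing a fraction f = 1 - setpoint/od of the culture would hit
    the setpoint exactly; `gain` scales that correction and `max_frac`
    caps it so slow-growing cultures are not washed out."""
    if od <= setpoint:
        return 0.0  # below setpoint: let the culture keep growing
    frac = gain * (1.0 - setpoint / od)
    frac = min(frac, max_frac)
    return culture_ml * frac
```

Running such a rule on each measurement cycle is what holds ~200 cultures with heterogeneous growth rates at their density setpoints: fast growers get diluted more often, slow growers are left alone.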
Mears, J.; Orchard, P.; Varshney, A.; Bose, M. L.; Robertson, C. C.; Piper, M.; Pashos, E.; Dolgachev, V.; Manickam, N.; Jean, P.; Kitzman, D. W.; Fauman, E.; Damilano, F.; Roth Flach, R. J.; Nicklas, B.; Parker, S. C.
Short-read Illumina sequencing of 10x Genomics single-nucleus multiome libraries captures only the 3′ end of RNA transcripts, losing transcription start site (TSS) information. Here we demonstrate nanopore sequencing of 10x multiome libraries, which enables the profiling of full-length transcripts. We show concordance with common short-read sequencing-based workflows, including successful genetic demultiplexing of nanopore data despite its higher error rate. We compare TSSs identified using nanopore sequencing of multiome cDNA to those identified using a short-read 5′ assay, and provide an optimized approach for the preprocessing of nanopore reads prior to TSS identification. We find that nanopore sequencing of multiome cDNA captures a median of 63% of the TSSs detected by the 5′ assay.
Imada, T.; Shimizu, H.; Toya, Y.
13C-metabolic flux analysis (13C-MFA) is a crucial technique for experimentally determining metabolic flux distributions. Although the precision of each flux strongly depends on the tracer labeling pattern, optimizing that pattern remains challenging. We developed an integrated platform, OpenMebius2, a graphical user interface (GUI)-based software package for 13C-MFA that includes a tracer labeling pattern suggestion function to support subsequent experiments. The proposed function leverages metabolic flux distributions and their 95% confidence intervals obtained using low-cost 13C-labeled substrates to evaluate hypothetical parallel labeling scenarios and predict improvements in flux estimation precision.
Availability and implementation: This software runs on Linux, macOS, and Windows. The source code and binary files are available at https://github.com/metabolic-engineering/OpenMebius2 under the PolyForm Noncommercial License 1.0.0.
Rostamian, H.; Madden, E. W.; Kaplan, F. M.; Kim, R.; Isom, D. G.; Strahl, B. D.
This protocol enables rapid CRISPR-Cas9 genome editing in Saccharomyces cerevisiae by replacing restriction/ligation guide cloning with PCR-based protospacer installation and seamless plasmid recircularization. It describes in silico HDR donor and sgRNA design, installation of guide sequences into the Cas9 plasmid by PCR and seamless assembly, plasmid cloning and sequence verification in E. coli, and LiAc/PEG co-transformation of yeast with the Cas9-sgRNA plasmid plus HDR donor. The workflow selects yeast colonies on G418 and confirms edits by PCR and sequencing.
Elegheert, J.; Behiels, E.; Nair, A.; Doridant, A.
Lentiviral transduction of HEK293-derived expression cells provides a robust and scalable approach for large-scale protein production for structural and biochemical studies. Building on our previously reported platform, we introduce an improved workflow that decouples cell enrichment from target protein expression by enabling constitutive antibiotic selection of transduced cells prior to induction. The key advance is the use of orthogonal antibiotic-resistance cassettes to stringently enrich transduced cells, eliminate non-transduced cells, improve population homogeneity, and enable multi-vector co-selection for heteromeric assemblies and complexes. We provide two complementary transfer-vector suites. pHR-AB-CMV-TetO2 delivers maximal expression and supports inducible control in TetR-expressing lines while driving strong constitutive expression in non-TetR lines. pHR-AIO-AB ("all-in-one") encodes the transactivator, resistance marker, and gene of interest on a single construct to enable tightly controlled doxycycline-inducible expression in standard HEK293 lines, and is readily adaptable to other mammalian cell types. Both suites are available with puromycin, blasticidin, hygromycin, or zeocin markers, enabling straightforward co-infection and orthogonal multi-antibiotic selection of stable populations expressing multiple transgenes. They are well suited to demanding targets such as membrane proteins and multi-subunit assemblies. The protocol details the step-by-step generation of highly enriched, inducible HEK293 populations within 3-4 weeks.
Chen, Y.-K.; Harker, C. M.; Pham, C. M.; Grundy, L.; Wardill, H. R.; Roach, M. J.; Ryan, F. J.
Shotgun metagenomics has become a cornerstone of microbiome research, yet the complexity of existing workflows remains a major barrier for life scientists without dedicated bioinformatics support. Manual database setup, detailed sample sheet preparation, and management of software dependencies can make routine analysis difficult and time-consuming. Cross-study comparisons are further hampered by inconsistent processing pipelines, database versions, and profiling strategies, limiting reproducibility and the potential for large-scale meta-analyses. We present OpusTaxa, an open-source Snakemake workflow that provides end-to-end processing of short paired-end shotgun metagenomic data with minimal configuration. Users provide either FASTQ files or Sequence Read Archive accessions; OpusTaxa automatically downloads required databases, performs quality control, removes host reads, and executes taxonomic profiling, metagenome assembly, and functional analysis. All analysis modules can be independently toggled, and per-sample outputs are automatically merged into harmonised, cross-sample tables ready for downstream exploration. Across two public datasets, we demonstrate how OpusTaxa can be used to compare consistency across complementary taxonomic profilers and to estimate microbial load in addition to standard metagenomic workflows.
Availability: OpusTaxa is freely available at https://github.com/yenkaiC/OpusTaxa. Documentation, test data, and example configurations are included in the repository.
Schroeder, L.; Gerber, S.; Ruffini, N.
Background: Ambient RNA contamination is a pervasive artifact of single-cell and single-nucleus RNA sequencing (sxRNA-seq), yet no consensus exists on which computational removal tool performs best across experimental platforms.
Results: We present a systematic benchmark of six tools (CellBender, DecontX, SoupX, scCDC, scAR, and CellClear) evaluated across six human-mouse cell line mixing (hgmm) datasets (1k-20k cells) providing partial ground truth, two droplet-based complex tissue datasets (PBMC scRNA-seq; prefrontal cortex snRNA-seq), and a well-plate-based dataset (BD Rhapsody WBC). Using inter-species counts as partial ground truth, we quantify sensitivity, specificity, precision, and removal consistency per tool. We further apply a count-integrity criterion quantifying gene-cell positions where corrected values exceed raw counts. This reveals that scAR and CellClear do not merely denoise but fundamentally restructure count matrices: CellClear replaces >93% of counts with values derived from matrix factorization, while scAR generates spurious cell types absent from uncorrected data, including three spurious coarse cell types in the BD Rhapsody dataset and up to eight novel cell types in the prefrontal cortex. CellBender and SoupX exhibit reliable contamination removal with minimal count distortion. DecontX and scCDC are the only tools operable on non-droplet platforms without raw count matrix access. Runtime benchmarking at atlas scale (up to 172,000 nuclei) further demonstrates that CellClear fails to scale.
Conclusions: Count matrix integrity, not removal sensitivity alone, must be a primary criterion when selecting ambient RNA correction tools. We provide platform-specific recommendations and a decision framework to guide tool selection across experimental contexts.
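The count-integrity criterion described above reduces to a simple check: ambient-RNA removal should only ever subtract counts, so any position where the corrected value exceeds the raw value is a violation. A minimal sketch (plain nested lists stand in for the count matrices; not the benchmark's code):

```python
def count_integrity_violations(raw, corrected):
    """Fraction of gene-cell positions where the 'corrected' count
    exceeds the raw count. A pure removal method should never produce
    such positions, since decontamination can only take counts away."""
    total = 0
    violations = 0
    for raw_row, cor_row in zip(raw, corrected):
        for r, c in zip(raw_row, cor_row):
            total += 1
            if c > r:
                violations += 1
    return violations / total if total else 0.0
```

A tool that reports a high violation fraction is not removing contamination but re-synthesizing the matrix, which is the behavior the benchmark flags for scAR and CellClear.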
Djidrovski, I.
Computational toxicology increasingly relies on evidence, high-throughput screening, predictive (Q)SAR, adverse outcome pathways (AOPs), physiologically based kinetic (PBK/PBPK) models, and exposure databases to support integrated approaches to testing and assessment (IATA). Yet the practical workflow remains fragmented across heterogeneous tools, data formats, and licensing regimes. Large language models (LLMs) can lower the interface barrier, but free-text interaction alone is insufficient for regulatory-grade science: it is difficult to audit, difficult to reproduce, and prone to overconfident errors. Here we introduce ToxMCP, a collection of Model Context Protocol (MCP) servers designed as a guardrailed, federated integration layer for reproducible computational toxicology. ToxMCP wraps toxicology-relevant capabilities, including chemical identity and regulatory context (EPA CompTox), rapid ADMET profiling (ADMETlab 3.0), mechanistic pathway retrieval and structuring (AOP knowledge services), quantitative read-across workflows (OECD QSAR Toolbox), and mechanistic PBPK simulation (Open Systems Pharmacology Suite), as typed tools with explicit inputs/outputs, provenance bundles, and policy hooks (e.g., applicability domain checks, critical-action confirmation, and role-based access control). We demonstrate how natural-language risk questions can be compiled into auditable tool invocations, returning mechanistic metrics such as tissue AUC/Cmax, sensitivity curves, and conservative points of departure. We further outline an evaluation protocol for measuring computational reproducibility, task throughput, and scientific utility across multi-tool toxicology tasks. ToxMCP reframes LLMs for toxicology from conversational summarizers into accountable orchestrators of established scientific kernels, enabling faster iteration while preserving the evidentiary structure expected in regulatory and academic settings. 
Banerjee, T. D.; Raine, J.; Mathuru, A.; Monteiro, A.
Automation of multi-step mRNA imaging protocols increases reproducibility and throughput in spatial biology, as many workflows require repeated buffer exchanges, precise timing, and controlled reaction conditions. Commercial automation platforms can be expensive, proprietary, and difficult to customise, limiting their use in most laboratories. Here, we present two open-source robots for the Rapid Amplified Multiplexed Fluorescent In-Situ Hybridization (RAM-FISH) workflow based on programmable delivery of fluids and integrated thermal control, with no dedicated bubble trap required. The first robot performs the steps necessary for signal localization (Multiplexer), and the second performs signal removal (RemBot). Both robots function without manual supervision and conduct precise, repeatable buffer exchanges, temperature regulation, and timed reactions. Both can operate on free-floating and gel-embedded tissues and can be assembled from widely available components. The robots support iterative imaging workflows, enabling detection of multiple genes across sequential hybridization rounds within the same sample. By providing customizable and accessible robots, we lower the technical barriers to performing complex spatial imaging experiments and enable scalable, hands-free execution of multi-step multiplex-FISH.
Gorin, G.; Guruge, D.; Goodman, L.
Rigorous experimental design, including formal power analysis, is a cornerstone of reproducible RNA sequencing (RNA-seq) research. The design of RNA-seq experiments requires computing the minimum sample number required to identify an effect of a particular size at a predefined significance level. Ideally, the statistical test used for the analysis of experimental data should match the test used for sample size determination; however, few tools use the assumptions of the popular differential expression testing framework DESeq2, and most opt for simulation-based rather than analytical approaches. Grounded in the DESeq2 model framework, we derive sample size requirements for both single-cell and bulk RNA-seq experiments, delivered as a web-based power analysis tool, DEPower (https://poweranalysis-fb.streamlit.app/), that makes rigorous RNA-seq study design accessible to all researchers.
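The flavor of an analytical (rather than simulation-based) sample-size calculation can be sketched with a standard normal-approximation formula, using a negative-binomial-style variance for a log-scale count. This is a rough illustration under stated assumptions (delta-method variance 1/mu + dispersion, two-sided Wald test), not DEPower's actual derivation:

```python
import math
from statistics import NormalDist

def nb_sample_size(mean_count, dispersion, log2_fc, alpha=0.05, power=0.8):
    """Approximate per-group sample size to detect a log2 fold change
    between two groups, assuming Var[log(mu_hat)] ~ 1/mu + dispersion
    per sample (a common NB delta-method approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    var_log = 1.0 / mean_count + dispersion     # per-sample log-scale variance
    delta = log2_fc * math.log(2)               # effect on natural-log scale
    n = 2.0 * (z_a + z_b) ** 2 * var_log / delta ** 2
    return math.ceil(n)
```

The formula makes the key trade-offs explicit: higher dispersion or lower expression inflates the required n, while larger effect sizes shrink it, which is exactly the relationship a power-analysis tool exposes interactively.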
Brekke, T. D.; Weeks, T.; Barber, R. A.; Thomson, I.; Gooda, R.; Gargiulo, R.; Delhaye, G.; Andrew, C.; Kowal, J.; Bidartondo, M.; Martinez-Suz, L.
Processing Sanger DNA sequences remains a routine yet technically demanding step in many biodiversity and ecological studies, particularly when barcoding large numbers of environmental samples. Manual inspection and editing of trace files, DNA sequence alignment, and classification against taxonomic reference databases are time-consuming, inconsistent, and prone to error. These challenges are compounded in studies involving degraded samples, in-house DNA sequencing, under-described taxa, or investigators with limited access to computational tools. We present MycorrhizaTracer, an open-source, fully automated pipeline for processing and taxonomically classifying large batches of Sanger sequencing chromatograms. We have optimized it for fungal and plant taxa, but it is adaptable across the tree of life. The pipeline performs quality trimming, consensus generation from bidirectional reads, taxonomic classification via BLAST, clustering, optional salvaging of low-quality sequences, and functional annotation of fungal taxa. Designed for scalability and ease of use, MycorrhizaTracer can process thousands of DNA chromatograms in a matter of hours without the need for an HPC. Accuracy and ecological relevance are ensured by features such as gene region-specific taxonomic filtering and sequence-based clustering of unclassified reads. By streamlining trace-to-taxon workflows, MycorrhizaTracer reduces the burden of manual curation, supports reproducibility, and enables efficient recovery of biodiversity data from Sanger sequences, particularly in field-based or resource-limited research contexts.
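The quality-trimming step such a pipeline automates can be illustrated with a sliding-window phred filter. This is a simplified stand-in under assumed parameters (window size, Q20 threshold), not MycorrhizaTracer's actual trimming logic:

```python
def trim_by_quality(seq, quals, window=10, min_q=20):
    """Trim a Sanger read to the longest prefix whose sliding-window
    mean phred quality stays at or above `min_q`. Sanger traces
    typically degrade at the 3' end, so trimming stops at the first
    window that falls below the threshold."""
    end = 0
    for i in range(len(seq) - window + 1):
        if sum(quals[i:i + window]) / window >= min_q:
            end = i + window
        else:
            break
    return seq[:end], quals[:end]
```

Automating this per-trace decision, which is otherwise done by eye in a chromatogram viewer, is where most of the consistency gain over manual editing comes from.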
Cortot, M.; Stehlik, T.; Koch, A.; Schlemmer, T.
Efficient protein synthesis in eukaryotic cells typically requires a 5' cap structure on messenger RNAs (mRNAs). However, under stress conditions or in viral infection, translation can also occur independently of the cap via internal ribosome entry sites (IRES). IRES elements are therefore key regulators of protein expression in both viral and cellular contexts. Here we describe a cell-free protocol to quantitatively assess IRES-mediated translation using wheat germ extract (WGE) and a firefly luciferase (FLuc) reporter. The protocol includes template preparation, RNA synthesis, and luminescence measurement following in vitro translation in WGE. This method enables rapid and robust comparison of IRES activity under controlled conditions and can additionally be applied to evaluate mRNA modifications designed to enhance translation efficiency.
Key features
- Stringent in vitro workflow from DNA template preparation through RNA synthesis and protein synthesis to reporter readout, including quality controls.
- Evaluation of IRES-driven translation suitable for testing combinations of IRES and CDS.
- Translation analysis without radioactive labeling.
Graphical Abstract: Pipeline for the production and evaluation of IRES-firefly luciferase constructs using wheat germ extract. (1-4) Preparation: IRES-firefly luciferase constructs are amplified in E. coli and isolated from bacterial cells. Plasmids are linearized to prepare for in vitro transcription. (5-6) Transcript synthesis and verification: In vitro transcription is followed by electrophoretic validation to confirm integrity and correct molecular weight. (7-8) Translation and detection: Translation is executed in wheat germ extract and quantified by measuring reporter activity in a luminometer.
Liu, Y.; Fukai, Y. T.; Cano-Muniz, S.; Perez, V.; Todorov, M.; Ortega, G.; Morello, T.; Loeffler, D.; Paetzold, J.; Xu, X.; Lamm, L.; Ma, N.; Erturk, A.; Schroeder, T.; Boeck, L.; Schapiro, D.; Schaub, N.; Marr, C.; Peng, T.
Quantitative fluorescence microscopy is frequently confounded by spatially varying illumination and temporal intensity drift. Although BaSiC is a widely adopted retrospective correction method, it can fail when foreground content is strongly correlated across images, a common regime in time-lapse, tiled and volumetric acquisitions, and its application often requires manual parameter tuning that limits reproducibility and scalability. We introduce BaSiCPy, a foreground-aware implementation of BaSiC that improves illumination profile estimation under correlated foreground structures, provides automatic hyperparameter selection and accelerates large-scale processing through GPU support. BaSiCPy is distributed as an open-source Python package with graphical and programmatic interfaces, facilitating integration into contemporary bioimage analysis workflows.
Gorin, G.; Goodman, L.
The empty drops in single-cell sequencing experiments are an underexplored resource. As such, they present a substrate for asking questions orthogonal to standard single-cell sequencing workflows, calibrating statistical models using simple internal controls, and detecting technical outliers that would otherwise be challenging to distinguish from real biology. In this case study, we report a relatively simple procedure to detect sequencing artifacts and make recommendations to reduce the risk of erroneous quantifications. In addition, we report the surprising abundance and co-expression of mRNA coding for neuropeptide-related genes in the empty drops, possibly reflecting underlying physiology.
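A common way to use empty drops as an internal control is to pool barcodes with very few total UMIs into an ambient expression profile. The sketch below illustrates that pooling step under assumed names and a hypothetical UMI cutoff; it is not the procedure from this case study:

```python
def ambient_profile(counts_by_barcode, max_umis=10):
    """Estimate the ambient ('empty drop') expression profile by
    pooling barcodes whose total UMI count falls at or below
    `max_umis`, then normalizing pooled counts to gene fractions.
    `counts_by_barcode` maps barcode -> {gene: count}."""
    pooled = {}
    for counts in counts_by_barcode.values():
        if sum(counts.values()) <= max_umis:
            for gene, n in counts.items():
                pooled[gene] = pooled.get(gene, 0) + n
    total = sum(pooled.values())
    return {g: n / total for g, n in pooled.items()} if total else {}
```

Genes that are unexpectedly over-represented in this profile relative to cell-containing droplets are candidates for ambient contamination or, as the case study notes for neuropeptide transcripts, potentially real extracellular biology.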
Song, A.; LaVergne, A.; Wrobel, B.
Building a high-fidelity computational model of the whole human brain will require preservation of the ultrastructure at the level of the entire organ, post-mortem. For such a model to reflect as closely as possible the brain in the living state, artifacts that arise during both the agonal phase and the postmortem interval will need to be minimized. This is potentially feasible if a terminally-ill patient donates their brain for research following physician-assisted death. In this paper, we modify a protocol for aldehyde-stabilized cryopreservation to make it compatible with physician-assisted death. We use pigs as a model, as they resemble humans in cardiovascular and brain anatomy. Aldehyde-stabilized cryopreservation was designed to provide superior structural preservation of brains of any size, across all anatomical scales, compatible with diverse analytical assays and long-term storage without ultrastructural degradation. We demonstrate, with light microscopy and volume electron microscopy, that our brain preservation protocol results in connectomically traceable whole brains, and we propose an economically feasible storage modality that is expected to maintain stability of ultrastructure and macromolecules in the brain even for thousands of years. Most importantly, we establish that the perfusability window, the time after cardiac arrest during which blood washout must be initiated for brain ultrastructure to be preserved, is approximately 14 min.