Neuroinformatics
○ Springer Science and Business Media LLC
All preprints, ranked by how well they match Neuroinformatics's content profile, based on 40 papers previously published here. The average preprint has a 0.04% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Kleven, H.; Gillespie, T. H.; Zehl, L.; Dickscheid, T.; Bjaalie, J. G.; Martone, M. E.; Leergaard, T. B.
Brain atlases are important reference resources for accurate anatomical description of neuroscience data. Open access, three-dimensional atlases serve as spatial frameworks for integrating experimental data and defining regions-of-interest in analytic workflows. However, naming conventions, parcellation criteria, area definitions, and underlying mapping methodologies differ considerably between atlases and across atlas versions. This lack of standardization impedes use of atlases in analytic tools and registration of data to different atlases. To establish a machine-readable standard for representing brain atlases, we identified four fundamental atlas elements, defined their relations, and created an ontology model. Here we present our Atlas Ontology Model (AtOM) and exemplify its use by applying it to mouse, rat, and human brain atlases. We propose minimum requirements for FAIR atlases and discuss how AtOM may facilitate atlas interoperability and data integration. AtOM provides a standardized framework for communication and use of brain atlases to create, use, and refer to specific atlas elements and versions. We argue that AtOM will accelerate analysis, sharing, and reuse of neuroscience data.
Har-Gil, H.; Jacobson, Y.; Proenneke, A.; Staiger, J. F.; Tomer, O.; Halperin, D.; Blinder, P.
The analysis of neuronal structure and its relation to function has been a fundamental pillar of neuroscience since its earliest days, with the underlying premise that morphological properties can modulate neuronal computations. The rich three-dimensional structure of neurons is often quantified with tools developed in other fields, such as graph theory and computational geometry; nevertheless, some of the more advanced tools developed in these fields have not yet been made accessible to the neuroscience community. Here we present Neural Collision Detection, a library providing high-level interfaces to collision-detection routines and alpha-shape calculations, as well as statistical analysis and visualization for 3D objects, with the aim of lowering the barrier to entry for neuroscientists. We also demonstrate a variety of use cases for the library, along with exemplary analyses and visualizations carried out with it on real neuronal and vascular data.
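To make the core operation concrete, here is a minimal, hedged sketch of a 3D collision check between two point clouds. This is not the Neural Collision Detection API (the function names and the brute-force approach are our own illustration); a real library would use spatial indexing and mesh-aware tests.

```python
import numpy as np

def min_pairwise_distance(points_a, points_b):
    """Smallest Euclidean distance between two 3D point clouds (brute force)."""
    a = np.asarray(points_a, dtype=float)   # shape (n, 3)
    b = np.asarray(points_b, dtype=float)   # shape (m, 3)
    # Pairwise distance matrix via broadcasting: (n, 1, 3) - (1, m, 3)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min())

def collides(obj_a, obj_b, tolerance=1.0):
    """Report a 'collision' when the two objects come within `tolerance` units."""
    return bool(min_pairwise_distance(obj_a, obj_b) <= tolerance)

# Toy neuronal and vascular point sets, closest approach 0.5 units apart
neuron = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
vessel = [(2.5, 0.0, 0.0), (3.5, 0.0, 0.0)]
```

Production collision detection replaces the quadratic distance matrix with KD-trees or bounding-volume hierarchies, but the contract is the same: two 3D objects in, a proximity verdict out.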
Zhao, S.; Qian, P.; Liu, L.
Motivation: Recent advances in reconstructing 3D neuron morphologies at the whole-brain level offer exciting opportunities to study single-cell genotypes and phenotypes. However, it remains challenging to define cell types and subtypes properly.
Results: Because morphological feature spaces are often too complicated for classifying neurons directly, we introduce a method to detect the optimal subspace of features in which neurons can be well clustered. We have applied this method to one of the largest curated databases of morphological reconstructions, containing more than 9,400 mouse neurons of 19 cell types. Our method detects the distinctive feature subspace for each cell type, and outperforms prevailing cell-typing approaches in its ability to identify key morphological indicators for each neuron type and to separate superclasses of these neuron types. The resulting subclasses of neuronal types could inform studies of brain connectivity and modeling, and support further analyses of feature spaces.
Availability: All datasets used in this study are publicly available. All analyses were conducted with the Python package scikit-learn, version 0.23.1. Source code used for data processing, analysis, and figure generation is available as an open-source Python package at https://github.com/SEU-ALLEN-codebase/ManifoldAnalysis.
Contact: ljliu@braintell.org
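The idea of searching for a feature subspace in which classes separate well can be sketched with a simple Fisher-style criterion. This is an illustration of the general strategy, not the paper's actual algorithm: we exhaustively score small feature subsets by the ratio of between-class to within-class scatter and keep the best one.

```python
import itertools
import numpy as np

def separation_score(X, labels, subset):
    """Between-class vs. within-class scatter for the chosen feature subset."""
    Z = X[:, list(subset)]
    classes = np.unique(labels)
    overall = Z.mean(axis=0)
    between = sum((Z[labels == c].mean(axis=0) - overall) ** 2 * (labels == c).sum()
                  for c in classes).sum()
    within = sum(((Z[labels == c] - Z[labels == c].mean(axis=0)) ** 2).sum()
                 for c in classes)
    return between / (within + 1e-12)

def best_subspace(X, labels, size=2):
    """Exhaustively search all subsets of `size` features for the best score."""
    n_features = X.shape[1]
    return max(itertools.combinations(range(n_features), size),
               key=lambda s: separation_score(X, labels, s))

# Synthetic "morphological features": only features 1 and 3 carry class signal
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 1] += labels * 4.0
X[:, 3] -= labels * 4.0
```

Exhaustive search is only feasible for small subset sizes; for realistic morphological feature spaces a greedy or manifold-based search (as the paper pursues) is required.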
Shen, J.; Mei, J.; Wallden, M.; Ino, F.
FreeSurfer is among the most widely used software suites for the study of cortical and subcortical brain anatomy. However, analysis with FreeSurfer can be time-consuming, and it lacks support for graphics processing units (GPUs) since the core development team stopped maintaining GPU-accelerated versions because of the significant programming cost. As FreeSurfer is a large project with millions of source lines, in this work we introduce and examine the use of a directive-based framework, OpenACC, for GPU acceleration of FreeSurfer, and we find that the OpenACC-based approach significantly reduces programming cost. Moreover, because the overhead incurred by CPU-to-GPU data transfer is the major challenge in delivering high-performance GPU-based code, we compare two schemes, copy-and-transfer and overlapped-fully-transfer, for reducing this overhead. Experimental results show that the target function we accelerated with the overlapped-fully-transfer scheme ran 2.3 times as fast as the original CPU-based function, and the GPU-accelerated program achieved an average speedup of 1.2 over the original CPU-based program. These results demonstrate the usefulness and potential of the proposed OpenACC-based approach for integrating GPU support into FreeSurfer; it can easily be extended to other computationally expensive functions and modules of FreeSurfer to achieve further speedups.
Ferreiro, E.; Rodriguez-Iglesias, N.; Cardoso, J.; Valero, J.
Volume estimations are crucial for many neuroscience studies, allowing the evaluation of changes in the size of brain areas that may have relevant functional consequences. Classical histological methods and modern human brain imaging techniques rely on obtaining physical or digital sections, of known thickness, of the organ to be analyzed. This "slicing" strategy entails an unavoidable loss of information about the three-dimensional organization of the analyzed structures, which especially affects the precision of volumetric measurements. Several methods have been developed to overcome this problem; one of the most commonly used is the classical Cavalieri method. In this chapter, we first provide an overview of the Cavalieri method and propose a new one, the Truncated Cone Shape (TCS) method, for estimating volumes from tissue sections. Second, we compare the accuracy of both methods using computer-generated objects of different shapes and sizes, and conclude that the TCS method more frequently provides a better estimate of real volumes than the Cavalieri method. Third, we describe a protocol for estimating volumes using a self-developed and freely available ImageJ tool, VolumestJ (https://github.com/Jorvalgl/VolumestJ), which implements both the Cavalieri and TCS methods on digital images of tissue sections. We expect VolumestJ to facilitate the work of researchers interested in volume estimation.
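The two estimators can be written down in a few lines. The Cavalieri estimator multiplies each section area by the section thickness; for the TCS sketch below we assume the standard conical-frustum volume formula between consecutive section faces (the paper's exact estimator may differ in detail). On a cone, the frustum formula is exact, which illustrates why a shape-aware estimator can beat plain slab summation.

```python
import math

def cavalieri_volume(areas, thickness):
    """Classical Cavalieri estimator: sum of section areas times thickness."""
    return thickness * sum(areas)

def tcs_volume(boundary_areas, thickness):
    """Truncated-cone estimator between consecutive section faces:
    V = h/3 * (A1 + A2 + sqrt(A1*A2)), summed over adjacent face pairs."""
    total = 0.0
    for a1, a2 in zip(boundary_areas, boundary_areas[1:]):
        total += thickness / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
    return total

# Example: a cone of height 1 and base radius 1 (true volume pi/3),
# cut into 10 sections with circular faces A(z) = pi * (1 - z)^2.
t = 0.1
faces = [math.pi * (1 - i * t) ** 2 for i in range(11)]   # 11 face areas
v_tcs = tcs_volume(faces, t)                # exact for conical shapes
v_cav = cavalieri_volume(faces[:-1], t)     # uses each section's lower face
```

Here `v_tcs` recovers pi/3 up to floating-point error, while the Cavalieri sum over the same faces overestimates the cone; on irregular biological shapes neither is exact, which is what the chapter's simulations quantify.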
Subramanian, A.; Lan, H.; Govindarajan, S.; Viswanathan, L.; Choupan, J.; Sepehrband, F.
We present NiftyTorch, a deep learning framework for neuroimaging. The motivation behind developing this library is that there are few centralized tools for deploying 3D deep learning in neuroimaging, and most existing tools require expert knowledge of deep learning or programming, creating a barrier to entry. The goal is to provide a one-stop package with which users can perform classification, segmentation, and image transformation tasks. The intended audience is members of the neuroimaging community who would like to explore deep learning but have no background in coding. In this article we explore the capabilities of the framework, its performance, and future work.
Myers, P. E.; Arvapalli, G. C.; Ramachandran, S. C.; Pisner, D. A.; Frank, P. F.; Lemmer, A. D.; Bridgeford, E. W.; Nikolaidis, A.; Vogelstein, J. T.
Using brain atlases to localize regions of interest is a requirement for making neuroscientifically valid statistical inferences. These atlases, represented in volumetric or surface coordinate spaces, can describe brain topology from a variety of perspectives. Although many human brain atlases have circulated in the field over the past fifty years, limited effort has been devoted to their standardization. Standardization can facilitate consistency and transparency with respect to orientation, resolution, labeling scheme, file storage format, and coordinate space designation. Our group has consolidated an extensive selection of popular human brain atlases into a single, curated, open-source library, where they are stored following a standardized protocol with accompanying metadata, which can serve as the basis for future atlases. The repository containing the atlases, the specification, and the relevant transformation functions is available at https://github.com/neurodata/neuroparc.
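A standardized atlas entry of this kind pairs the image file with a machine-readable metadata record. The sketch below shows what such a record and a minimal consistency check could look like; the field names are hypothetical illustrations and do not reproduce the actual neuroparc specification.

```python
import json

# Hypothetical metadata record for one atlas; field names are illustrative
# and are NOT the actual neuroparc schema.
atlas_meta = {
    "name": "ExampleAtlas",
    "space": "MNI152",
    "resolution_mm": [1.0, 1.0, 1.0],
    "orientation": "RAS",
    "labels": {"1": "region_a", "2": "region_b", "3": "region_c"},
}

def validate(meta, required=("name", "space", "resolution_mm", "labels")):
    """Check that a metadata record carries the fields a pipeline expects."""
    missing = [k for k in required if k not in meta]
    if missing:
        raise ValueError(f"missing metadata fields: {missing}")
    return True

serialized = json.dumps(atlas_meta, indent=2)   # what would live on disk
```

The value of a shared specification is precisely that every downstream tool can run the same `validate`-style check instead of guessing each atlas's conventions.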
Borst, A.; Denk, W.
Volume electron microscopy together with computer-based image analysis is yielding neural circuit diagrams of ever larger regions of the brain [1-10]. These datasets are usually represented as a cell-to-cell connectivity matrix and contain important information about prevalent circuit motifs, allowing various theories about computation in a brain structure to be tested directly [11,12]. Of particular interest are the detection of cell assemblies and the quantification of feedback, which can profoundly change circuit properties. While the ordering of cells along the rows and columns does not change the connectivity, it can make particular connectivity patterns recognizable. For example, when cells are ordered along the flow of information, feedback and feedforward connections are segregated above and below the main matrix diagonal, respectively. Different algorithms exist for reordering matrices so as to minimize a given cost function, but either their performance becomes unsatisfactory at a given circuit size or the CPU time needed to compute them scales unfavorably with the number of neurons [13-15]. Building on previous ideas [16-18], we describe an algorithm that is effective in matrix reordering with respect to both performance and the scaling of computing time. Rather than reordering the matrix in discrete steps, the algorithm transiently assigns each cell a real-valued parameter describing its location on a continuous axis (the "smooth index") and finds the parameter set that minimizes the cost. We find that the smooth-index algorithm outperforms all algorithms we compared it to, including those based on topological sorting.
Author Summary: Connectomic data provide researchers with neural circuit diagrams of ever larger regions of the brain. These datasets are usually represented as a cell-to-cell connectivity matrix and contain important information about prevalent circuit motifs. Such motifs, however, only become visible if the connectivity matrix is reordered appropriately. For example, when cells are ordered along the flow of information, feedback and feedforward connections are segregated above and below the main matrix diagonal, respectively. While most previous approaches rely on topological sorting, our method treats the discrete vertex indices as real numbers (the "smooth index") along independent parameter axes and defines a differentiable cost function, allowing gradient-based algorithms to find a minimum. The parameter set at this minimum is then re-discretized to reorder the connectivity matrix accordingly. We find that our method scales favorably with circuit size and outperforms all algorithms we compared it to.
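The continuous-relaxation idea can be sketched in a few lines of numpy. The quadratic cost below, which prefers each edge's target to sit one unit "downstream" of its source, is our own illustrative choice, not necessarily the paper's cost function; the structure of the method is the same: gradient descent on real-valued positions, then an argsort to recover a permutation.

```python
import numpy as np

def smooth_index_order(W, lr=0.05, steps=500, seed=0):
    """Order cells by gradient descent on continuous positions x.

    Illustrative cost: sum over edges i->j of (x_j - x_i - 1)^2, preferring
    each target one unit downstream of its source. The cost is differentiable,
    so plain gradient descent applies; argsort re-discretizes the result.
    """
    n = W.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    src, tgt = np.nonzero(W)              # edge list from connectivity matrix
    for _ in range(steps):
        r = x[tgt] - x[src] - 1.0         # per-edge residual
        grad = np.zeros(n)
        np.add.at(grad, tgt, 2 * r)       # d cost / d x_target
        np.add.at(grad, src, -2 * r)      # d cost / d x_source
        x -= lr * grad
    return np.argsort(x)                  # back to a discrete permutation

# A feedforward chain 0 -> 1 -> 2 -> 3 should be recovered in order.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = 1.0
```

On this toy chain the minimum places the cells in feedforward order regardless of the random initialization; at connectome scale the payoff is that gradient steps cost O(edges), avoiding combinatorial search over permutations.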
Claudi, F.; Tyson, A. L.; Branco, T.
The recent development of high-resolution three-dimensional (3D) digital brain atlases and high-throughput brain wide imaging techniques has fueled the generation of large datasets that can be registered to a common reference frame. This registration facilitates integrating data from different sources and resolutions to assemble rich multidimensional datasets. Generating insights from these new types of datasets depends critically on the ability to easily visualize and explore the data in an interactive manner. This is, however, a challenging task. Currently available software is dedicated to single atlases, model species or data types, and generating 3D renderings that merge anatomically registered data from diverse sources requires extensive development and programming skills. To address this challenge, we have developed brainrender: a generic, open-source Python package for simultaneous and interactive visualization of multidimensional datasets registered to brain atlases. Brainrender has been designed to facilitate the creation of complex custom renderings and can be used programmatically or through a graphical user interface. It can easily render different data types in the same visualization, including user-generated data, and enables seamless use of different brain atlases using the same code base. In addition, brainrender generates high-quality visualizations that can be used interactively and exported as high-resolution figures and animated videos. By facilitating the visualization of anatomically registered data, brainrender should accelerate the analysis, interpretation, and dissemination of brain-wide multidimensional data.
Turner, M. A.; Chartrand, T.; Summers, M. T.; Hooper, M.; van Velthoven, C.; Waters, J.; de Vries, S.; Zeng, H.; Tasic, B.; Svoboda, K.; Long, B.
The thalamus connects the sensory organs and major subcortical brain regions with the neocortex. The thalamus has long been divided into multiple discrete nuclei based on cytoarchitecture, histochemical stains, and mesoscale connectivity. However, thalamic nuclei do not completely describe thalamic organization. For example, some boundaries between thalamic nuclei are disputed, whereas other nuclei are known to contain subdomains with distinct connectivity and function. Moreover, the correspondence between cellular gene expression and other properties of thalamic projection neurons remains to be established. Spatial analysis of single-cell gene expression provides a basis for reevaluating thalamic organization. We present the THALMANAC (THALamus MERFISH ANalysis and ACcess), a Findable, Accessible, Interoperable, Reusable, and Reproducible (FAIRR) resource for exploring and analyzing single-cell transcriptomic variation in the thalamus. The THALMANAC provides streamlined access to thalamic gene expression data registered to the common coordinate framework and tools for quantitative analysis and visualization of these data, all encapsulated in a reproducible cloud computing platform. Using this resource, we find that gene expression generally supports the parcellation of the thalamus into distinct nuclei. Some nuclei, such as the anteromedial nucleus, are additionally composed of discrete subdomains, while other nuclei share patterns of gene expression or are arrayed on a spatial gradient of gene expression. The THALMANAC establishes spatial transcriptomic data as a foundation for delineating thalamic organization.
Meyers, E. M.
Neural decoding is a powerful method to analyze neural activity. However, the code needed to run a decoding analysis can be complex, which can present a barrier to using the method. In this paper we introduce a package that makes it easy to perform decoding analyses in the R programming language. We describe how the package is designed in a modular fashion, which allows researchers to easily implement a range of different analyses. We also discuss how to format data for the package, and we give two examples of how to use it to analyze real data. We believe that this package, combined with the rich data analysis ecosystem in R, will make it significantly easier for researchers to create reproducible decoding analyses, which should help increase the pace of neuroscience discoveries.
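The package itself is written in R, but the core decoding loop it automates is language-agnostic: split trials into folds, fit a classifier on the training folds, and score label predictions on the held-out fold. A minimal sketch with a nearest-centroid decoder (our simplification, not the package's classifier set):

```python
import numpy as np

def decode_accuracy(X, y, n_folds=5, seed=0):
    """Cross-validated nearest-centroid decoding of labels from activity."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        classes = np.unique(y[train])
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in classes])
        # Assign each held-out trial to the nearest class centroid
        d = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=-1)
        correct += (classes[d.argmin(axis=1)] == y[test]).sum()
    return correct / len(y)

# Synthetic "trials": two stimulus classes with separated mean activity
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 60)
X = rng.normal(size=(120, 20)) + y[:, None] * 1.5
```

Everything beyond this skeleton, such as resampling over neurons, temporal generalization, and shuffle-based null distributions, is what a dedicated decoding package standardizes.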
Kantor, B.; Ben-Ami Bartal, I.
A central trend in current neuroscience is the identification of brain-wide neural circuits associated with complex behavior. A major challenge for this approach is the laborious process of registering and quantifying fluorescence on histological brain slices, as well as the difficulty of deriving functional insight from the complex resulting datasets. As a solution, we developed Brainways, a simple-to-use, AI-based, open-source software package for identifying the neural networks involved in a specific behavior, from digital images to network analysis. Brainways offers automatic registration of coronal slices to any 3D brain atlas, quantification of fluorescent markers (e.g., activity markers, tracers) per region, and statistical comparisons with visual mapping of contrasts between conditions. A built-in partial least squares task analysis provides the neural patterns associated with a specific contrast, and network graph analysis represents functional connectivity. Trained on atlases for rats and mice, Brainways currently provides above 80% atlas registration accuracy and allows the user to easily adjust the outputs for a better fit. A case-study validation of Brainways is demonstrated on a previously published dataset describing the neural correlates of empathic helping behavior in rats. The original results were successfully replicated and expanded upon, thanks to a much larger sample size that covered over 100 times more brain tissue than the original manual sampling. Brainways thus provides a fast, accurate solution for quantification in large-scale projects and facilitates novel neurobiological insights into the structural and functional neural networks involved in complex behavior. Brainways has a highly accessible GUI, and its functionality is exposed through a Python-based API that can be extended for different applications.
Dang, T.; Fermin, A. S. R.; Machizawa, M. G.
Neuroimaging data are complex and high-dimensional, which poses challenges for machine learning (ML) applications. Among the many factors that affect decoding accuracy, feature selection is a crucial step in data analysis, especially in neuroimaging studies, where the number of features is often much larger than the number of observations. Optimizing feature selection for such high-dimensional neuroimaging data is therefore challenging with conventional ML algorithms. Here, we introduce an efficient ML package incorporating a forward variable selection (FVS) algorithm that optimizes the identification of features for both classification and regression models. In our framework, the best pairing of ML model and features that explain the inputs can be determined automatically, and the toolbox can be executed in a parallel environment for efficient computation. The parallelized FVS algorithm iteratively selects the feature that, compared against the previous steps, most improves predictive performance. It evaluates goodness-of-fit across different models using k-fold cross-validation and identifies the best subset of features based on a pre-defined criterion for each model. Furthermore, the hyperparameters of each ML model are optimized at each forward iteration. The final output highlights an optimized number of selected features (brain regions of interest) together with decoding accuracies. Using our pipeline, we examined the effectiveness of the toolbox on an existing neuroimaging (structural MRI) dataset. Comparing ML models with and without the FVS approach, we demonstrate that FVS significantly improved the accuracy of the ML algorithms over counterpart models without FVS. We also confirmed that parallel computation considerably reduced the computational burden for the high-dimensional MRI data.
The oFVSD toolbox efficiently and effectively improves the performance of both classification and regression models on neuroimaging data, and should be applicable to many other neuroimaging datasets and beyond. This Python package is open-source and freely available, making it a useful toolbox for neuroimaging communities seeking to improve decoding accuracy on their datasets.
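The core FVS loop, greedily adding the feature that most improves cross-validated performance and stopping when no candidate helps, can be sketched with ordinary least squares as the stand-in model. This is a minimal illustration, not the oFVSD implementation: it omits parallelization, multiple model families, and per-iteration hyperparameter tuning.

```python
import numpy as np

def cv_mse(X, y, n_folds=5):
    """k-fold cross-validated MSE of ordinary least squares on X."""
    folds = np.array_split(np.arange(len(y)), n_folds)
    err = 0.0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        A = np.column_stack([X[train], np.ones(len(train))])   # add intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([X[test], np.ones(len(test))]) @ coef
        err += ((pred - y[test]) ** 2).sum()
    return err / len(y)

def forward_select(X, y, max_features=None):
    """Greedily add the feature that most reduces CV error; stop when none helps."""
    remaining = list(range(X.shape[1]))
    selected, best_err = [], np.inf
    while remaining and len(selected) != max_features:
        scores = {f: cv_mse(X[:, selected + [f]], y) for f in remaining}
        f_best = min(scores, key=scores.get)
        if scores[f_best] >= best_err:
            break                          # no candidate improves the fit
        selected.append(f_best)
        remaining.remove(f_best)
        best_err = scores[f_best]
    return selected

# Synthetic data where only features 0 and 3 drive the target
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + 0.1 * rng.normal(size=150)
```

With this setup the loop first picks feature 3 (the stronger predictor), then feature 0, and then stops, since the remaining features are pure noise and only inflate cross-validated error.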
Raikov, I. G.; Milstein, A. D.; Moolchand, P.; Szabo, G. G.; Schneider, C. J.; Hadjiabadi, D. H.; Chatzikalymniou, A. P.; Soltesz, I.
Large-scale computational models of the brain are necessary to accurately represent anatomical and functional variability in neuronal biophysics across brain regions, and to capture and study local and global interactions between neuronal populations on a behaviorally relevant temporal scale. We present the methodology behind, and an initial implementation of, a novel open-source computational framework for the construction, simulation, and analysis of models consisting of millions of neurons on high-performance computing systems, based on the NEURON and CoreNEURON simulators (Carnevale and Hines, 2006; Kumbhar et al., 2019). This framework uses the HDF5 data format and software library (HDF Group, 2021) and includes a data format for storing the morphological, synaptic, and connectivity information of large neuronal network models, along with an accompanying open-source software library that provides efficient, scalable parallel storage and MPI-based data movement capabilities. We outline our approaches for constructing detailed large-scale biophysical models with topographical connectivity and input stimuli, and present simulation results obtained with a full-scale model of the dentate gyrus constructed with our framework. The model generates sparse and spatially selective population activity that fits well with in vivo experimental data. Moreover, our approach is fully general and can be applied to modeling other regions of the hippocampal formation in order to rapidly evaluate specific hypotheses about large-scale neural architectural features.
Petersen, P. C.; Buzsaki, G.
The large diversity of neuron types in the brain, each characterized by a unique set of electrophysiological features, provides the means by which cortical circuits perform complex operations. To quantify, compare, and visualize the functional features of single neurons, we have developed an open-source framework, CellExplorer. It consists of three components: a processing module that calculates standardized physiological metrics, performs neuron-type classification, and detects putative monosynaptic connections; a flexible data structure; and a powerful graphical interface. The graphical interface makes it possible to explore any combination of pre-computed features at the speed of a mouse click. The CellExplorer framework allows users to process and relate their data to a growing collection of "ground truth" neurons from different genetic lines, as well as to tens of thousands of single neurons collected across our labs. We believe CellExplorer will accelerate the linking of physiological properties of single neurons in the intact brain to genetically identified types.
Liu, Z.-Q.; Bazinet, V.; Hansen, J. Y.; Milisav, F.; Luppi, A. I.; Ceballos, E. G.; Farahani, A.; Suarez, L. E.; Shafiei, G.; Markello, R. D.; Misic, B.
Brain imaging is an increasingly interdisciplinary field, encompassing multiple data types and multiple analytic traditions. Projects typically involve many moving parts, such as building customized preprocessing pipelines, transforming between data formats, preparing datasets for analysis, and ultimately displaying results. The field is conventionally built on highly specialized software packages that solve these individual challenges well but are not necessarily designed to be interoperable. Trainees new to the field are therefore often left to come up with isolated heuristics and workarounds to complete a project. Here we present a way to navigate the increasingly complex informatics ecosystem of brain imaging. netneurotools is our lab's internal Python toolkit, continuously developed and maintained by the lab's trainees. The philosophy of the toolkit is that it should be the Swiss army knife of the lab: functions and routines that we often use but that are not part of any established pipeline or package. Since its inception, the toolkit has been open and welcomes contributions from neuroscientists across the globe. netneurotools presents a necessary counterweight to out-of-the-box software packages and highlights the importance of smaller, ad hoc functions for implementing projects. By opening a window into the inner workings of a lab, netneurotools also presents an opportunity to begin a new type of discourse among groups and establish tangible links within the community.
Gerkin, R. C.; Birgiolas, J.; Jarvis, R. J.; Omar, C.; Crook, S. M.
Validating a quantitative scientific model requires comparing its predictions against many experimental observations, ideally from many labs, using transparent, robust statistical comparisons. Unfortunately, in rapidly growing fields like neuroscience, this is becoming increasingly untenable, even for the most conscientious scientists. Thus the merits and limitations of existing models, or whether a new model improves on the state of the art, are often unclear.

Software engineers seeking to verify, validate, and contribute to a complex software project rely on suites of simple executable tests, called "unit tests". Drawing inspiration from this practice, we previously developed SciUnit, an easy-to-use framework for developing data-driven "model validation tests": executable functions, here written in Python. Each such test generates and statistically validates predictions from a model against one relevant feature of empirical data to produce a score indicating agreement between the model and the data. Suites of such validation tests can be used to clearly identify the merits and limitations of existing models and developmental progress on new models.

Here we describe NeuronUnit, a library that builds upon SciUnit and integrates with several existing neuroinformatics resources to support the validation of single-neuron models using data gathered by neurophysiologists and neuroanatomists. NeuronUnit integrates with existing technologies like Jupyter, Pandas, and NeuroML, and with resources such as NeuroElectro, the Allen Institute, and the Human Brain Project, to make neuron model validation as easy as possible for computational neuroscientists.
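The pattern of a "model validation test", one experimental observation, one model prediction, one statistical score, can be sketched in plain Python. The class names and interface below are illustrative, not SciUnit's actual API, and the observation values are made-up numbers.

```python
class ValidationTest:
    """A data-driven model validation test in the SciUnit spirit.

    Illustrative interface (not SciUnit's actual API): subclasses define how
    to pull a prediction from a model; judging compares the prediction to an
    experimental observation and returns a Z-score.
    """
    def __init__(self, observation):
        self.observation = observation     # e.g. {"mean": ..., "sd": ...}

    def generate_prediction(self, model):
        raise NotImplementedError

    def judge(self, model):
        pred = self.generate_prediction(model)
        obs = self.observation
        return (pred - obs["mean"]) / obs["sd"]    # Z-score of agreement

class RestingPotentialTest(ValidationTest):
    def generate_prediction(self, model):
        return model.resting_potential_mV()

class ToyNeuronModel:
    def resting_potential_mV(self):
        return -68.0

# Hypothetical observation: resting potential -65 +/- 2 mV
test = RestingPotentialTest({"mean": -65.0, "sd": 2.0})
z = test.judge(ToyNeuronModel())
```

A suite of such tests, one per electrophysiological feature and data source, is what turns scattered comparisons into a reproducible validation report.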
Thibieroz, N.; Cordelieres, F.; Lopes Cardoso Filho, J.-C.; Machillot, P.; Marchadier, L.; Singh, A.; Picart, C.; Migliorini, E.
Measuring neurite length is crucial in neurobiology because it provides valuable insights into the growth, development, and function of neurons. In particular, neurite length is fundamental to studying neuronal development and differentiation, neuronal responses to drugs, neurodegenerative diseases, and neuronal plasticity. Surprisingly, there is currently a lack of tools for high-content neurite analysis. In this article, we present CABaNe, an open-source, high-content, rule-based ImageJ macro for cell analysis, including neurite length. The macro provides a graphical interface, metadata production, and means of verification before and after analysis. We tested both rule-based and machine-learning-based programming for cell identification, and obtained better precision and adaptability with rule-based identification. We benchmarked CABaNe against currently used manual or assisted techniques. On a small sample, CABaNe demonstrated a massive increase in dataset-processing speed while maintaining or improving precision compared to manual measurement. On a large dataset comparing different conditions, it successfully highlighted differences between conditions in a fully automated manner. CABaNe is therefore viable as a high-content option for cell analysis, for neurite length and other parameters, and provides a code base that can be reused for other analyses or to train deep learning models. In the future, we expect this tool to be widely used in both basic and applied neurobiology research.

Significance statement: When studying neuronal cell differentiation, an important morphological parameter is neurite length, which requires measuring the length of each analyzed cell's protrusions. Done manually, this analysis can be slow, as each individual cell must be measured independently. Efficient single-cell tools exist to assist the measurement, such as NeuronJ, but there is currently no automated tool for this analysis, and manual techniques suffer from operator bias. In this paper, we present a macro that fully automates the measurement of neurite length and other parameters, for each cell, in each image, in each condition.
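At its simplest, the measurement being automated here reduces to summing segment lengths along a traced neurite. A minimal sketch (not CABaNe's ImageJ-macro code, and ignoring pixel-to-micron calibration):

```python
import math

def neurite_length(points):
    """Total length of a traced neurite given its ordered (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# A right-angle trace: 3 px right, then 4 px up -> 3 + 4 = 7 px of neurite
trace = [(0, 0), (3, 0), (3, 4)]
length = neurite_length(trace)
```

The hard part a high-content tool solves is not this sum but producing the traces themselves, reliably, for every cell in every image.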
Zulaica, N. B.; Kanari, L.; Sood, V.; Rai, P.; Arnaudon, A.; Shi, Y.; Mange, D.; Van Geit, W.; Zbili, M.; Reva, M.; Boci, E.; Perin, R.; Pezzoli, M.; Benavides-Piccione, R.; DeFelipe, J.; Mertens, E.; de Kock, C. P. J.; Segev, I.; Markram, H.; Reimann, M. W.
The neocortex underlies cognitive abilities that set humans apart from other species. Although Ramon y Cajal initiated its study in the 19th century, much about its fundamental properties remains poorly understood. Biologically detailed modeling has been shown to serve as a tool for understanding the modeled system better. By comparing computational models for different species, we can highlight functional differences between them, find their anatomical or physiological basis, and thus improve our understanding of cortical function. In this study we built a detailed model of a human cortical microcircuit following an established workflow, and compared the human data and results against a previously published reconstruction of rat cortical circuitry. To parametrize the human model, we gathered new original data on human morphological reconstructions, axonal bouton densities, and single-cell and synaptic recordings, and combined them with data available in the literature and open-source databases. We also developed strategies to overcome missing data, such as generalizing or adapting data from rodents. The resulting model consists of seven columnar units with similar characteristics. Each column has a radius of 476 μm, a height of 2622 μm, a volume of 1.86 mm³, a total cell density of 24,186 cells/mm³, on the order of 35,000 cells, around 12 million connections, and approximately 47 million synapses. Comparing the rat and human models showed that the human cortex is less dense in terms of cell bodies than the rodent cortex. Human cells have more complex branching, but lower bouton densities, than rodent cells. However, the number of connections between cell types is similar.
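As a quick sanity check on the stated column geometry, treating the column as a cylinder (an assumption on our part; the radius and height figures suggest it) reproduces the quoted 1.86 mm³ volume:

```python
import math

radius_mm = 0.476          # 476 micrometers
height_mm = 2.622          # 2622 micrometers
volume_mm3 = math.pi * radius_mm ** 2 * height_mm   # cylinder volume, ~1.866
```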
Lerma-Usabiaga, G.; Perry, M.; Wandell, B. A.
Reproducible Tract Profiles (RTP) comprises a set of methods to manage and analyze diffusion-weighted imaging (DWI) data for reproducible tractography. The tools take MRI data from the scanner and process them through a series of analyses implemented as Docker containers that are integrated into a modern neuroinformatics platform (Flywheel). The platform guarantees that the entire pipeline can be re-executed, using the same data and computational parameters. In this paper, we describe (1) a cloud-based neuroinformatics platform, (2) a tool to programmatically access and control the platform from a client, and (3) the DWI analysis tools used to identify the positions of 22 tracts and their diffusion profiles. Together, these three components define a system that transforms raw data into reproducible tract profiles for publication.

Graphical abstract: The RTP methods comprise two main parts: (1) server-side software tools for storing data and metadata and managing containerized computations, and (2) client-side software tools that enable the researcher to read data and metadata and manage server-side computations. The server-side computational tools are embedded in containers linked to a JSON file with a complete specification of the computational parameters. The data and computational infrastructure on the server are fully reproducible.