Neuroinformatics
Springer Science and Business Media LLC
Preprints posted in the last 90 days, ranked by how well they match the content profile of Neuroinformatics, based on 40 papers previously published there. The average preprint has a 0.04% match score for this journal, so anything above that is already an above-average fit.
Zulaica, N. B.; Kanari, L.; Sood, V.; Rai, P.; Arnaudon, A.; Shi, Y.; Mange, D.; Van Geit, W.; Zbili, M.; Reva, M.; Boci, E.; Perin, R.; Pezzoli, M.; Benavides-Piccione, R.; DeFelipe, J.; Mertens, E.; de Kock, C. P. J.; Segev, I.; Markram, H.; Reimann, M. W.
The neocortex underlies cognitive abilities that set humans apart from other species. Although Ramon y Cajal initiated its study in the 19th century, much about its fundamental properties remains poorly understood. Biologically detailed modeling has been shown to serve as a tool for better understanding the modeled system. By comparing computational models for different species we can highlight functional differences between them, find their anatomical or physiological basis, and thus improve our understanding of cortical function. In this study we built a detailed model of a human cortical microcircuit following an established workflow. We compared the human data and results against a previously published reconstruction of rat cortical circuitry. To parametrize the human model, we gathered new original data on human morphological reconstructions, axonal bouton densities, and single-cell and synaptic recordings. We combined them with data available in the literature and open-source databases. We also developed various strategies to overcome missing data, such as generalizing or adapting data from rodents. The resulting model consists of seven columnar units with similar characteristics. Each column has a radius of 476 µm, a height of 2622 µm, a volume of 1.86 mm³, a total cell density of 24,186 cells/mm³, on the order of 35,000 cells, around 12 million connections, and approximately 47 million synapses. Comparing the rat and the human model showed that the human cortex is less dense in terms of cell bodies than the rodent cortex. Human cells have more complex branching, but lower bouton densities, than rodent cells. However, the number of connections between cell types is similar.
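As a quick consistency check on the reported column geometry, the volume follows from the cylinder formula; the sketch below uses only the numbers quoted in the abstract.

```python
import math

# Column dimensions reported in the abstract
radius_um = 476.0
height_um = 2622.0

# Cylinder volume in mm^3 (1 mm = 1000 um)
volume_mm3 = math.pi * (radius_um / 1000.0) ** 2 * (height_um / 1000.0)
print(round(volume_mm3, 2))  # ~1.87, matching the reported 1.86 mm^3 after rounding
```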
Emissah, H. A.; Tecuatl, C.; Ascoli, G. A.
Background: The rapid expansion of large-scale neuroscience datasets has increased the need for automated, accurate, and standardized quality control (QC). Manual proofreading of 3-dimensional neural morphology (SWC files) remains labor-intensive, error-prone, and non-scalable. We developed and evaluated a fully automated, machine-learning driven QC pipeline to standardize neural reconstructions, detect and correct structural anomalies, and rectify dendritic labeling in pyramidal neurons. Methods: We developed an end-to-end, cloud-deployed pipeline for automated QC, correction, and standardization of SWC-formatted neural morphologies. The framework integrates deterministic structural normalization, topology repair, geometric correction, quantitative morphometric analysis, and graph-based dendritic relabeling within a containerized React/Flask architecture deployed on Amazon Web Services. Rule-based algorithms systematically detect, classify, and correct structural irregularities including overlapping nodes, spurious side branches, non-positive radii, disconnected components, and anomalously long parent-child connections. A graph convolutional network, trained on Sholl-derived features from 20,500 pyramidal neurons, performs dendritic relabeling. Model training employed an 80/10/10 train-validation-test split with adaptive learning-rate scheduling and distributed execution across ten runs to evaluate stability and reproducibility. The pipeline generates images of the final product and computes quantitative morphometrics using L-Measure. Results: All neuronal reconstructions were processed without manual intervention. Automated normalization and topology repair restored structurally coherent and biologically accurate morphologies suitable for quantitative analysis and visualization without data loss. 
Dendritic relabeling achieved a mean accuracy of 99.51%, consistent between validation and test sets, with class-weighted precision of 0.978, recall of 0.977, and F1-score of 0.977. Enforcing a single apical dendritic tree per neuron improved anatomical consistency without reducing classification performance. Distributed training completed all runs in approximately 25 hours, demonstrating scalability and reproducibility for large datasets. Conclusions: We present a fully automated and cloud-scalable open-source pipeline for standardizing neural reconstructions and performing biologically consistent dendritic classification with near-perfect accuracy. The automated correction and relabeling procedures do not alter the size of the original SWC files or compromise unaffected morphological detail, ensuring geometric fidelity and compatibility with downstream analysis tools. This open-access framework provides a robust foundation for high-throughput neural morphology curation and large-scale neuroanatomical analysis.
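The rule-based checks described above can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the pipeline's actual code: it covers only two of the listed irregularities (non-positive radii and disconnected components), using the standard SWC column order (id, type, x, y, z, radius, parent).

```python
def check_swc(rows):
    """rows: list of (id, type, x, y, z, radius, parent) tuples.
    Returns a list of (issue, node_id_or_count) findings."""
    issues = []
    ids = {r[0] for r in rows}
    roots = 0
    for node_id, _type, _x, _y, _z, radius, parent in rows:
        if radius <= 0:
            issues.append(("non_positive_radius", node_id))
        if parent == -1:
            roots += 1
        elif parent not in ids:
            issues.append(("disconnected", node_id))
    if roots != 1:  # a single-tree reconstruction should have exactly one root
        issues.append(("root_count", roots))
    return issues

# A 3-node toy neuron with one bad radius and one orphan node
toy = [(1, 1, 0, 0, 0, 5.0, -1),
       (2, 3, 1, 0, 0, 0.0, 1),    # non-positive radius
       (3, 3, 2, 0, 0, 0.5, 99)]   # parent 99 does not exist
print(check_swc(toy))  # [('non_positive_radius', 2), ('disconnected', 3)]
```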
Volcko, K. L.; McCutcheon, J. E.
Lick microstructure is a term used in behavioural neuroscience to describe the information that can be obtained from a detailed examination of rodent drinking behaviour. Rather than simply recording total intake (volume consumed), lick microstructure examines how licks are grouped, and the spacing of these groups of licks. This type of analysis can provide important insights into why an animal is drinking, for example, whether it is influenced by taste or affected by consequences of consumption (e.g., feeling "full"). Here we present a software package, lickcalc, that allows detailed microstructural analysis of licking patterns. The software is browser-based and is hosted at https://lickcalc.uit.no, or the repository can be downloaded and installed locally. Lick timestamps can be loaded from a variety of formats, and different analysis and plotting options allow quality control of data and determination of critical parameters for microstructural analysis, such as the number and size of lick bursts. Data can be divided into epochs for detailed examination of changes across a session. Batch processing and custom configurations are supported. In this manuscript, we demonstrate use of the functions exposed by lickcalc by analysing data comparing lick patterns between mice on a protein-restricted diet and a control (non-restricted) diet. We show that lickcalc allows quality control of the data and uncovering of subtle differences in lick behaviour that are not apparent when just considering the total number of licks. This software makes microstructural analysis accessible to any researchers who wish to employ it while providing sophisticated analyses with high scientific value.
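The core idea of burst segmentation can be sketched in a few lines: a new burst starts whenever the inter-lick interval exceeds a threshold. The function name and threshold below are illustrative and do not reflect lickcalc's actual API.

```python
def lick_bursts(timestamps, max_ili=0.5):
    """Group lick timestamps (seconds) into bursts: a new burst starts
    whenever the inter-lick interval exceeds max_ili (~0.5 s is a common
    criterion; tools like lickcalc let users tune this)."""
    bursts = []
    for t in timestamps:
        if bursts and t - bursts[-1][-1] <= max_ili:
            bursts[-1].append(t)   # continue the current burst
        else:
            bursts.append([t])     # start a new burst
    return bursts

licks = [0.0, 0.1, 0.2, 1.5, 1.6, 5.0]
bursts = lick_bursts(licks)
print(len(bursts), [len(b) for b in bursts])  # 3 [3, 2, 1]
```

Total licks alone (6) would hide the structure; the burst sizes and counts are what microstructural analysis works with.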
Slack, J. C.; Rutledge, G.; Yadav, A. P.
Online processing and visualization of large-scale neural data is critical for neuroscientific discovery and advancements in neural engineering. However, with the development of technologies like Neuropixels (NP) probes, which enable simultaneous streaming from hundreds of recording electrodes, handling such data in real-time has become an ongoing challenge. Moreover, keeping pace with recording hardware has required most existing software, such as SpikeGLX for NP probes, to prioritize acquisition stability, leaving data processing and visualization to primarily be performed offline. Thus, we created OP-GLX, a MATLAB-based toolbox designed to operate in tandem with SpikeGLX to enhance the fetching, processing, and visualization of incoming neural data. The OP-GLX toolbox features several processing capabilities, including spike detection, computing time-binned firing rates, plotting spike waveforms, and conducting principal component analysis (PCA). The processed neural data is displayed on a native graphical user interface (GUI) for intuitive and customizable interaction with the experiment. The performance testing of OP-GLX showed that it supports real-time operation, confirmed by the absence of SpikeGLX stream buffer fetch errors across multiple acquisition settings. By complementing current neural data acquisition methods and providing stable online functionality, we envision that OP-GLX will enable researchers to visualize and interpret their data more effectively during ongoing neuroscience experiments.
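OP-GLX itself is MATLAB-based; as a language-neutral illustration of one of its listed processing steps, time-binned firing rates can be computed as below. This is a sketch of the concept, not the toolbox's code.

```python
def binned_rates(spike_times, bin_width, duration):
    """Count spikes per time bin and convert counts to firing rates (Hz).
    spike_times and bin_width in seconds; returns one rate per bin."""
    n_bins = int(duration / bin_width)
    counts = [0] * n_bins
    for t in spike_times:
        b = int(t / bin_width)
        if 0 <= b < n_bins:
            counts[b] += 1
    return [c / bin_width for c in counts]

spikes = [0.1, 0.2, 0.7, 1.6, 1.7, 1.8]
print(binned_rates(spikes, bin_width=0.5, duration=2.0))  # [4.0, 2.0, 0.0, 6.0]
```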
De Matola, M.; Arcara, G.
Convolutional neural networks (CNNs) are a class of artificial neural networks (ANNs). Since the early 2010s, they have been widely adopted as models of primate vision and classifiers of neuroimaging data, becoming relevant for a wealth of neuroscientific fields. However, the majority of neuroscience researchers come from soft-science backgrounds (like medicine, biology, or psychology) and do not have enough quantitative skills to understand the inner workings of A/CNNs. To avoid undesirable black boxes, neuroscientists should acquire some rudiments of computational neuroscience and machine learning. However, most researchers have neither the time nor the resources to make big learning investments, and self-study materials are hardly tailored to people with little mathematical background. This paper aims to fill this gap by providing a concise but accurate introduction to CNNs and their use in neuroscience -- using the minimum required mathematics, neuroscientific analogies, and Python code examples. A companion Jupyter Notebook guides readers through code examples, translating theory into practice and providing visual outputs. The paper is organised in three sections: The Concepts, The Implementation, and The Biological Plausibility of A/CNNs. The three sections are largely independent, so readers can either go through the entire paper or select a section of interest.
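In the spirit of the paper's minimal-mathematics approach, the core CNN operation can be written in plain Python: a valid-mode 2D cross-correlation (what deep-learning libraries implement under the name "convolution"), here applied with an edge-detecting kernel loosely analogous to an orientation-selective receptive field.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

# A vertical-edge kernel responds only where intensity changes left-to-right.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[1, -1],
        [1, -1]]
print(conv2d(img, edge))  # [[0, -2, 0], [0, -2, 0]]
```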
Chakladar, S.; Pan, S.; Limbrick, O.; Pandey, M.; Halupnik, G. L.; Zhao, A.; Mahjoub, M. R.; Quirk, J. D.; Nazeri, A.; Strahle, J. M.
Introduction: Current workflows for studying hydrocephalus in rodent models rely on manual segmentation or qualitative assessment of ventricular size on small animal magnetic resonance imaging, which are both inefficient and prone to variability. Atlas-based methods enable more streamlined segmentation, but their analysis is limited to morphologically normal samples. Objective: This study aimed to develop and internally validate a deep learning model that performs automated segmentation of lateral ventricles in rodent brain MRIs, allowing for 3D ventricle reconstruction, morphological analysis, and ventriculomegaly detection. Methods: Four U-Net++ neural networks, each with a different encoder backbone, were trained using 307 rodent brain MRIs (262 rats, 45 mice), each with manually segmented lateral ventricles serving as the ground truth. Model performance was evaluated using the Dice coefficient, intersection over union (IoU), and Hausdorff index. The best-performing model was evaluated further for its ability to quantify ventricle volume, convexity, surface area, and symmetry. Results: The U-Net++ model with an EfficientNet-B1 encoder achieved high accuracy (Dice: 0.823 ± 0.136; IoU: 0.721 ± 0.85). Further assessment of its morphological predictions found strong correlations with manual measurements of ventricular morphology, with Pearson and intraclass correlation coefficients exceeding 0.96 across all metrics. The full validated pipeline was packaged into a publicly available application, hosted at https://ava-tar.org. Conclusion: This study introduces a deep learning tool for automated segmentation and morphological analysis of lateral ventricles in rodent MRIs. The tool's efficiency and accuracy in quantifying ventricle morphology offer significant utility in preclinical hydrocephalus research, with potential future application in the clinical setting.
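The two overlap metrics reported above are tightly linked (Dice = 2·IoU/(1+IoU), so they always move together); a minimal sketch for binary masks represented as sets of voxel coordinates:

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for binary masks given as sets of voxel
    coordinates. Note the identity Dice = 2*IoU / (1 + IoU)."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou

pred = {(0, 0), (0, 1), (1, 0)}   # toy predicted mask
truth = {(0, 0), (0, 1), (1, 1)}  # toy ground-truth mask
print(dice_iou(pred, truth))  # (0.666..., 0.5)
```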
Tar, L.; Saray, S.; Mohacsi, M.; Freund, T. F.; Kali, S.
Anatomically and biophysically detailed models of neurons have been widely used to study information processing in these cells. Most studies focused on understanding specific phenomena, while more general models that aim to capture various cellular processes simultaneously remain rare even though such models are required to predict neuronal behavior under more complex, natural conditions. In this study, we aimed to develop a detailed, data-driven, general-purpose biophysical model of hippocampal CA1 pyramidal neurons. We leveraged extensive morphological, biophysical and physiological data available for this cell type, and established a systematic workflow for model construction and validation that relies on our recently developed software tools. The model is based on a high-quality morphological reconstruction and includes a diverse curated set of ion channel models. After incorporating the available constraints on the distribution of ion channels, the remaining free parameters were optimized using the Neuroptimus tool to fit a variety of electrophysiological features extracted from somatic whole-cell recordings. Validation using HippoUnit confirmed the model's ability to replicate key electrophysiological features, including somatic voltage responses to current input, the attenuation of synaptic potentials and backpropagating action potentials, and nonlinear synaptic integration in oblique dendrites. Our model also included active dendritic spines, modeled either explicitly or by merging their biophysical mechanisms into those of the parent dendrite. We found that many aspects of neuronal behavior were unaffected by the level of detail in modeling spines, but modeling nonlinear synaptic integration accurately required the explicit modeling of spines. 
Our data-driven model of CA1 pyramidal cells matching diverse experimental constraints is a general tool for the investigation of the activity and plasticity of these cells and can also be a reliable component of detailed models of the hippocampal network. Our systematic approach to building and validating general-purpose models should apply to other cell types as well. Author Summary: The brain processes information through the activity of billions of individual neurons. To understand how these cells work, scientists build detailed computer models that reproduce their electrical behavior. These models make it possible to explore situations that are difficult or impossible to test experimentally. However, many existing neuron models were designed to explain only a few specific phenomena, which limits their usefulness in more complex settings. In this study, we developed a comprehensive computer model of a hippocampal CA1 pyramidal neuron, a cell type that plays a central role in learning and memory. We built the model using extensive experimental data and applied automated methods to ensure that it reproduces a broad range of observed neuronal behaviors. We also examined how small structures called dendritic spines--tiny protrusions where most synaptic communication occurs--affect how neurons combine incoming signals. We found that even simplified models without individual spines can capture many aspects of neuronal activity, but understanding more complex forms of signal integration requires modeling spines explicitly. Our work also supports the development of more realistic simulations of brain circuits.
Thomas-Hegarty, J.; Pulver, S. R.; Smith, V. A.
Neural information flow describes the movement of activity between neurons or brain areas. Advances in experimental methods have allowed production of large amounts of observational data related to neuronal activity from the single-neuron to population level. Most current methods for analysing these data are based on pairwise comparison of activity, and fall short of reliably extracting neural information flow network structure. Dynamic Bayesian networks may overcome some of these limitations. Here we evaluate the performance of a range of Bayesian network scoring metrics against the performance of multivariate Granger causality and LASSO regression for their ability to learn the connectivity underlying simulated single-neuron and neuronal population data. We find that discrete dynamic Bayesian networks are the best performing method for single-neuron data, and perform consistently for neural-population data. Continuous dynamic Bayesian networks have a tendency to learn overly dense structures for both data types, but may have utility in scoping studies on single-neuron data. Multivariate Granger causality is the most robust method for learning structure of neural information flow between neural populations, but performs poorly on single-neuron data. Significance testing within multivariate Granger causality produces variable results between data types. Overall, this work highlights how the analysis of neural information flow can vary depending on the type and structure of underlying data, and promotes discrete dynamic Bayesian networks as a useful and consistent tool for neural information flow analysis.
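The intuition behind Granger causality, one of the methods compared above, is that past values of x should improve prediction of y beyond what y's own past provides. A toy lag-1 version (zero-mean data, no intercept, no significance test) can be written with closed-form least squares:

```python
def granger_lag1(x, y):
    """Toy lag-1 Granger comparison: residual sum of squares of an AR(1)
    model of y (restricted) versus y regressed on both its own lag and
    x's lag (full). A large drop suggests x 'Granger-causes' y."""
    Y, y1, x1 = y[1:], y[:-1], x[:-1]
    # Restricted model: y[t] = a * y[t-1]
    a = sum(u * v for u, v in zip(y1, Y)) / sum(u * u for u in y1)
    rss_r = sum((v - a * u) ** 2 for u, v in zip(y1, Y))
    # Full model: y[t] = b * y[t-1] + c * x[t-1]; solve 2x2 normal equations
    s11 = sum(u * u for u in y1)
    s12 = sum(u * w for u, w in zip(y1, x1))
    s22 = sum(w * w for w in x1)
    t1 = sum(u * v for u, v in zip(y1, Y))
    t2 = sum(w * v for w, v in zip(x1, Y))
    det = s11 * s22 - s12 * s12
    b = (t1 * s22 - t2 * s12) / det
    c = (t2 * s11 - t1 * s12) / det
    rss_f = sum((v - b * u - c * w) ** 2 for u, v, w in zip(y1, Y, x1))
    return rss_r, rss_f

x = [1, -1, 1, -1, 1, -1, 1, -1]
y = [0] + x[:-1]  # y is x delayed by one step, so x's lag predicts y exactly
print(granger_lag1(x, y))  # (1.0, 0.0): error vanishes once x's lag is included
```

A real analysis would turn this RSS drop into an F statistic and handle more lags and more variables, which is what multivariate Granger causality packages do.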
Tor, A.; Wu, Y.; Clarke, S. E.; Yamada, L.; Weissman, T.; Nuyujukian, P.; Brain Interfacing Laboratory,
Objective: The complexity of neural data changes as the brain processes information during events. Universal lossless compression algorithms, which are broadly applicable and grounded in information theory, identify and exploit redundancies in data in order to compress it to essentially optimal sizes regardless of underlying statistics. These algorithms may be used to conveniently and efficiently estimate a given signal's Shannon entropy rate, a biologically relevant measure of the complexity of a signal. It is therefore natural to explore their effectiveness in the analysis of spiking neural data. Approach: This work focuses on using compression to analyze recordings (96-channel Utah arrays) taken from motor cortex of animals performing reaching tasks for three days before and three days after administering electrolytic lesions (Subject U: 4 lesions, H: 3). In particular, we use the inverse compression ratio (ICR), which compares the sizes of compressed and uncompressed data to estimate the amount of statistically unique information. We calculate ICR with temporally-independent lossless compression (gzip) and temporally-dependent lossy compression (H.264, MPEG-2). Compression-based ICR was compared to single-neuron measures used to understand spiking data, such as average firing rates and Fano factor. Compression is also compared to common dimensionality reduction techniques, principal component analysis (PCA) and factor analysis (FA). Main Results: Statistical tests on aggregate data comparing each metric before and after lesioning reveal that ICR is able to significantly (Mann-Whitney U test, p < 0.01) detect lesions with higher accuracy than single-neuron metrics, but not dimensionality reduction (ICR methods: 85.7%, single-neuron methods: 78.6%, dimensionality reduction: 100%). Additionally, statistical results on the same data show that ICR metrics remain more stable than single-neuron methods after lesion. 
The bitrate parameter of lossy compression algorithms is swept to better understand the effect of information rates and "optimal" compression on lesion detection performance. Our conclusions are confirmed by the same analyses performed on several different simulated neural datasets. Significance: These results suggest that compression algorithms may be a useful tool to detect and better understand perturbations to the underlying structure of neural data. Information-theoretic analyses may complement techniques like dimensionality reduction and firing rate tuning as a convenient and useful tool to characterize neural data.
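The inverse compression ratio is straightforward to compute with any standard lossless compressor. The sketch below uses Python's zlib (DEFLATE, the same core algorithm as gzip) rather than the exact tools from the study; lower ICR means more redundancy, i.e. less statistically unique information.

```python
import random
import zlib

def icr(data: bytes) -> float:
    """Inverse compression ratio: compressed size / raw size."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
structured = bytes([0, 1] * 5000)                            # regular "spiking"
noisy = bytes(random.randrange(256) for _ in range(10000))   # near-random
print(icr(structured) < icr(noisy))  # True: regular activity compresses far better
```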
Zegers Delgado, J. A.; Renegar, N.; Pathirage, K.; Horiuchi, T. K.; Abshire, P. A.; Araneda, R. C.
Background: High-density microelectrode arrays (HD-MEAs) provide a strong platform to study individual neuronal activity and neuronal network dynamics. However, the analysis of high-volume, complex data presents several challenges. Common spike detection methods based on Root-Mean-Square (RMS) threshold crossing underestimate the number of spikes during neuronal bursting, which frequently occurs in neuronal cultures. In addition, the detection of action potentials by multiple electrodes makes spike sorting a computationally expensive task. New Method: We optimized a previously described detection method, based on the scaled median of absolute deviations (MED), that is more accurate during high rates of neuronal firing. In addition, we added a step to de-duplicate (DP) spikes recorded on multiple electrodes, which enhanced the accuracy of MED. The combined method of detection and de-duplication (DP-MED) is less computationally expensive and easier to implement than popular sorting alternatives like Kilosort-4. Results and Conclusions: During burst periods, the MED-based method detected over half of the spikes that were undetected by the RMS-based method. To evaluate the performance of DP-MED, we simulated data that emulates neuronal activity recorded with HD-MEAs. Across increasing firing rates, DP-MED shows more precision than Kilosort-4 but is slightly less accurate. After inducing high firing rates in cortical cultures with pharmacological stimulation, DP-MED detected a similar number of spikes as Kilosort-4; however, the analysis in Kilosort-4 was 40-fold more time-consuming. These results highlight the effectiveness of the DP-MED method in the context of drug screening using HD-MEAs.
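The scaled-MAD noise estimate underlying such detection methods is classic (sigma ≈ median(|x|)/0.6745 for Gaussian noise). The sketch below contrasts it with an RMS-based threshold on a toy trace containing a high-amplitude burst; the scale factor and threshold multiplier are illustrative, not the MED method's actual parameters.

```python
def mad_threshold(signal, k=5.0):
    """Spike threshold from the scaled median absolute deviation:
    sigma ~= median(|x|) / 0.6745. The median is robust to bursts."""
    s = sorted(abs(v) for v in signal)
    n = len(s)
    med = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return k * med / 0.6745

# Toy trace: low-amplitude baseline plus a burst of 10-unit spikes.
sig = [0.5, -0.5] * 50 + [10.0] * 20
rms = (sum(v * v for v in sig) / len(sig)) ** 0.5
# The MAD-based threshold stays near the baseline, so the burst spikes still
# cross it; a 5x-RMS threshold is inflated above the spikes themselves.
print(mad_threshold(sig) < 10 < 5 * rms)  # True
```

This is exactly the failure mode the abstract describes: RMS-based thresholds rise during bursting and miss spikes, while the median-based estimate does not.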
Roca, M.; Messuti, G.; Klepachevskyi, D.; Angiolelli, M.; Bonavita, S.; Trojsi, F.; Demuru, M.; Troisi Lopez, E.; Chevallier, S.; Yger, F.; Saudargiene, A.; Sorrentino, P.; Corsi, M.-C.
Neurodegenerative diseases such as Mild Cognitive Impairment (MCI), Multiple Sclerosis (MS), Parkinson's Disease (PD), and Amyotrophic Lateral Sclerosis (ALS) are becoming more prevalent. Each of these diseases, despite its specific pathophysiological mechanisms, leads to widespread reorganization of brain activity. However, the corresponding neurophysiological signatures of these changes have been elusive. As a consequence, to date, it is not possible to effectively distinguish these diseases from neurophysiological data alone. This work uses Magnetoencephalography (MEG) resting-state data, combined with interpretable machine learning techniques, to support differential diagnosis. We expand on previous work and design a Riemannian geometry-based classification pipeline. The pipeline is fed with typical connectivity metrics, such as covariance or correlation matrices. To maintain interpretability while reducing feature dimensionality, we introduce a classifier-independent feature selection procedure that uses effect sizes derived from the Kruskal-Wallis test. The ensemble classification pipeline, called REDDI, achieved a mean balanced accuracy of 0.81 (+/-0.04) across five folds, representing a 13% improvement over the state-of-the-art, while remaining clinically transparent. As such, our approach achieves reliable, interpretable, data-driven, operator-independent decision-support tools in Neurology.
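The Kruskal-Wallis statistic used here for feature ranking can be computed from pooled ranks. The sketch below omits tie correction and any conversion to an effect size; it simply shows that a feature separating the groups scores higher than one that does not.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for a list of sample groups (no tie
    correction): H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_v, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

separated = [[1, 2, 3], [10, 11, 12]]   # a feature that separates two groups
overlapping = [[1, 10, 3], [2, 11, 4]]  # a feature that does not
print(kruskal_wallis_h(separated) > kruskal_wallis_h(overlapping))  # True
```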
Nathan, V.; Tullo, S.; Herrera-Portillo, L.; Devenyi, G.; Yee, Y.; Chakravarty, M. M.
The Allen Mouse Brain Connectivity Atlas (AMBCA) is widely used to represent structural connectivity in the mouse brain. The AMBCA consists of tracer injection experiments where neuronal projections axonally connected to the initial injection site are labelled. The resulting whole-brain structural connectomes, derived from a subset of these experiments in C57BL/6 mice, have been used in several studies of connectomic architectures. However, through close inspection of n=437 distinct experiments used in a publicly available connectome (Knox et al., 2018), we observed experiments with off-target injections, diffuse projections, unrealistically small injections and projections, and anatomical misalignments, affecting the accuracy and applicability of these connectivity experiments. We applied combined automated and manual quality control (QC) and identified n=56 (~13% of the original n=437) experiments representing a wide variety of injection and projection failures across the brain. Automated QC was used to detect extreme injection and projection sizes and misalignments, while manual QC was used to detect subtle off-target tracer spreading. Using the remaining n=381 experiments, we rebuilt two different connectomes using previously published methods: the regionalized voxel model from Knox et al. (2018) and the homogeneous model from Oh et al. (2014). Our rebuilt connectomes show strong losses in connectivity between regions with limited evidence of structural connectivity by other methods (e.g. hippocampus-medulla, cerebellum-isocortex) and gains in connectivity between regions with strong connectivity evidence (hypothalamus-cerebellum, hypothalamus-isocortex). Finally, we analyzed the rich club and community organization to demonstrate the potential downstream impacts on the representation of the overall structural connectome architectures of our QC'd connectomes, and observed subtle whole-brain organizational changes. 
We present our rebuilt connectomes, and particularly highlight the regionalized voxel model, as more accurate representations of structural connectivity derived from the AMBCA.
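A generic way to flag "extreme injection and projection sizes" in automated QC is an interquartile-range rule; the function below is a hypothetical stand-in for that step, not the authors' actual criterion, using linear-interpolation quartiles.

```python
def iqr_outliers(volumes, k=1.5):
    """Return indices of values outside [Q1 - k*IQR, Q3 + k*IQR],
    a standard robust rule for flagging extreme measurements."""
    s = sorted(volumes)

    def q(p):  # linearly interpolated quantile, numpy-style
        i = p * (len(s) - 1)
        lo = int(i)
        return s[lo] + (i - lo) * (s[min(lo + 1, len(s) - 1)] - s[lo])

    q1, q3 = q(0.25), q(0.75)
    iqr = q3 - q1
    return [i for i, v in enumerate(volumes)
            if v < q1 - k * iqr or v > q3 + k * iqr]

# Hypothetical injection volumes with one implausibly small and one huge value
vols = [0.9, 1.0, 1.1, 1.2, 0.95, 1.05, 0.001, 9.0]
print(iqr_outliers(vols))  # [6, 7]
```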
Bryant, A. G.
Recent years have seen an expanding repertoire of code-based tools for visualizing neuroimaging data, promoting reproducibility and interpretability of brain-mapping findings. However, most open-source visualization packages for the human brain are geared toward the cerebral cortex, and the comparatively few options for the subcortex and cerebellum are limited in scope (i.e., atlas support) and flexibility. We address this critical gap by introducing subcortex_visualization, an open-source package offered in both Python and R, that provides a unified and accessible framework for programmatically visualizing non-cortical data across many popular subcortical and cerebellar parcellation atlases. These visuals are inspired by the ggseg R package for cortical data, which implements standardized rendering conventions to facilitate comparison across atlases in a vectorized two-dimensional format. In addition to the vectorized versions of nine subcortical and cerebellar atlases--to our knowledge, the most comprehensive collection of non-cortical atlases in a single visualization toolbox--we also provide a step-by-step tutorial for users to generate custom vector-based visualizations from any given brain segmentation, enabling flexible extension to new atlases and structures. Collectively, subcortex_visualization and the accompanying documentation support reproducible and interpretable visualization of neuroimaging data below the cortical mantle.
Myrov, V.; Siebenhuhner, F.; Wang, S. H.; Arnulfo, G.; Juvonen, J. J.; Roascio, M.; Burlando, G.; Suleimanova, A.; Repo, J.; Liu, W.; Palva, S.; Palva, J. M.
CROCOpy is a lightweight toolbox for the assessment of neuronal oscillations, and multiple observables of functional connectivity (phase synchronization, amplitude coupling, and cross-frequency coupling) and critical dynamics (avalanches, long-range temporal correlations, bistability, and functional excitation-inhibition ratio). It was developed to simplify the analysis of continuous electrophysiological recordings and, in addition to metric computation, also includes methods for narrow-band filtering and statistical analysis. It is device-agnostic and supports both GPU and CPU computations. The toolbox also provides detailed tutorials.
Mensah, S.; Atsu, E. K. A.; Ammah, P. N. T.
Brain tumors are one of the most life-threatening diseases, requiring precise and timely detection for effective treatment. Traditional methods for brain tumor detection rely heavily on manual analysis of MRI scans, which is time-consuming, subjective, and prone to human error. With advancements in deep learning, Convolutional Neural Networks (CNNs) have become popular for medical image analysis. However, CNNs are limited in their ability to capture spatial hierarchies and pose variations, which reduces their accuracy, particularly for tasks like brain tumor segmentation where precise spatial relationships are crucial. This research introduces a hybrid Capsule Neural Network (CapsNet) and ResNet50 model designed to overcome the limitations of traditional CNNs by capturing both spatial and pose information in MRI scans. The proposed model leverages ResNet50 for feature extraction and CapsNet for handling spatial relationships, leading to more accurate segmentation. The study evaluates the model on the BraTS2020 dataset and compares its performance to state-of-the-art CNN architectures, including U-Net and pure CNN models. The hybrid model, featuring a custom 5-cycle dynamic routing algorithm to enhance capsule agreement for tumor boundaries, achieved 98% accuracy and an F1-score of 0.87, demonstrating superior performance in detecting and segmenting brain tumors. This study pioneers the systematic evaluation of the ResNet50 + CapsNet hybrid on the BraTS2020 dataset, with a tailored class weighting scheme addressing class imbalance and improving effectiveness in identifying irregularly shaped tumors and smaller tumor regions. The study offers a robust solution for automating brain tumor detection. 
Future work will explore the use of Capsule Networks alone for brain tumor detection in MRI data and investigate alternative Capsule Network architectures, as well as their integration into clinical decision support systems.
Palm, G.; Paoletti, M.; Ito, J.; Stella, A.; Grün, S.
We propose a quality measure for spatio-temporal spike patterns (STPs) in multiple-neuron recordings. In such recordings, repeating STPs or pattern repetitions (PRs) are often found, with many of these generated by chance. To rule those out, statistical tests have been developed to discriminate the unlikely from the more likely PRs. This statistical problem is complicated by the fact that there are several obvious quality criteria for a PR, such as the size (the number of spikes) of the pattern and the number of its occurrences. Here, we propose a canonical way of combining several criteria (which we collect in the so-called signature of the pattern) into a single quality measure, based on the unlikeliness of the pattern. This measure is defined mathematically, and a formula for its computation is derived for stationary spike trains. It can be used to compare PRs. Since spike trains are not stationary in practice, we discuss, for two experimental data sets, how well the stationary formula correlates with the defined quality measure as determined from simulations. The results encourage the use of the stationary formula or also some simpler, related formulas as proxies for the quality, for the comparison of PRs and also for statistical tests that avoid the multiple testing problem incurred by using several quality criteria. Based on our results, we propose a few test statistics, i.e., random variables on the space of multi-unit spike trains with an appropriate null-hypothesis distribution, to evaluate STPs with less computational and sampling efforts.
Rangaprakash, D.; Barry, R. L.
Over the past two decades, open-source research software such as SPM, AFNI and FSL formed the substrate for advancements in the brain functional magnetic resonance imaging (fMRI) field. The spinal cord fMRI field has matured substantially over the past decade, yet there is limited research software tailored for processing cord fMRI data, which has distinct noise sources, unique challenges, and niche processing requirements. Spinal cord fMRI data analysis is a different beast, involving specialized pre- and post-processing steps due to the cord's unique anatomy and higher distortions/physiological noise, thus requiring extensive and careful quality assessment. Building upon 10+ years of research and development, we present Neptune - a user-interface-based MATLAB toolbox. With 30,000+ lines of in-house code, it is designed to be easy to use and does not require programming knowledge. Neptune builds on our previously published 15-step pre-processing pipeline (Barry et al., 2016) and presents a 19-step pipeline with new processing steps, and enhancements to existing steps. Neptune has a 4-step post-processing pipeline aimed at fMRI connectivity modeling. It generates extensive and novel quality control visuals to enable a thorough assessment of data quality, and displays them in an elegant webpage format. We demonstrate the utility of Neptune on our 7T data. Certain features of the popular Spinal Cord Toolbox (SCT) are integrated into Neptune, and users can import/export between Neptune and other software such as FSL and SPM. The availability of this open-source, easy-to-use software will benefit the spinal cord fMRI community, and also tip the cost-benefit balance for brain fMRI researchers to invest in learning new software to conduct important neuroscientific and clinical research using spinal cord fMRI.
Szinte, M.; Bach, D. R.; Draschkow, D.; Esteban, O.; Gagl, B.; Gau, R.; Gregorova, K.; Halchenko, Y. O.; Huberty, S.; Kling, S. M.; Kulkarni, S.; Markiewicz, C. J.; Mikkelsen, M.; Oostenveld, R.; Pfarr, J.-K.
The Brain Imaging Data Structure (BIDS) is a widely adopted, community-driven standard for organizing neuroimaging data and metadata. Although numerous extensions have incrementally extended its coverage to new modalities and data types, an unambiguous, granular specification for eye-tracking recordings has been lacking. Here, we present BEP20, a BIDS extension proposal describing how BIDS will structure data and metadata produced by eye-tracking devices, including gaze position and pupil data. In addition to prescribing the organization of the unprocessed (raw) recordings and associated metadata as produced by the device, BEP20 also resolves gaps in the current BIDS specification beyond the scope of eye tracking. In particular, it adds a mechanism for including asynchronous model parameters and messages, such as contextual information and statuses, and events, such as triggers, generated by the device. BEP20 includes examples that illustrate its applicability in various experimental settings. This BIDS extension provides a robust standard that supports the development of self-adaptive, open, and automated eye-tracking data structures, thereby bolstering the transparency and reliability of results in this field.
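To give a flavor of what a BIDS-organized eye-tracking dataset could look like, the layout below follows the general BIDS pattern of a tabular recording plus a JSON metadata sidecar. It is purely illustrative: the actual entity names, suffixes and required metadata fields are defined normatively in the BEP20 text, not here.

```text
sub-01/
  ses-01/
    func/
      sub-01_ses-01_task-reading_bold.nii.gz
      sub-01_ses-01_task-reading_recording-eyetracking_physio.tsv.gz   # hypothetical: gaze x/y, pupil size per sample
      sub-01_ses-01_task-reading_recording-eyetracking_physio.json     # hypothetical: sampling rate, units, device metadata
```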
Dimitriadis, S. I.
Objective: Brain activity is measured using noninvasive electrophysiological techniques such as electroencephalography (EEG) and magnetoencephalography (MEG). Data recorded from sensors outside the skull are routinely transformed into a virtual source space, where brain activity is typically parcellated into anatomical brain areas using an atlas. Functional connectivity (FC) is then estimated between pairs of regions, with each region's multidimensional (multi-voxel) activity reduced to a representative time series by one of several techniques. Many FC estimators have been used to quantify FC between pairs of brain areas. Alternatively, multivariate extensions of these estimators have been proposed, eliminating the need for a representative time series per brain area. Approach: A framework for systematically evaluating FC estimators in the virtual MEG source space, across the multiple processing steps of brain network construction, has been missing. Here, we compared an exhaustive set of bivariate FC estimators combined with representative-time-series extraction techniques, their multivariate extensions, and fully multivariate estimators for discriminating subjects with mild cognitive impairment (MCI) from healthy controls, using a k-NN classifier and an appropriate graph distance metric. Main Results: Our results demonstrate that the multivariate extensions of bivariate FC estimators (a representative-free approach), which summarize pairwise FC strength across all voxels of two brain areas, and multivariate estimators that consider the region-wise voxel time series jointly, clearly outperform bivariate FC estimators based on representative time series. Significance: Multivariate extensions of bivariate FC estimators and multivariate FC estimators are natural alternatives to combining a representative time series per brain area with bivariate FC estimators.
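The classification scheme described above can be sketched minimally: each subject is represented by an FC matrix, a distance between matrices plays the role of the graph distance metric, and a k-NN vote assigns the label. The Frobenius-norm distance here is a stand-in for the study's purpose-built graph distance, and all names are hypothetical.

```python
import numpy as np

def graph_distance(fc_a, fc_b):
    """Stand-in distance between two FC matrices (Frobenius norm);
    the study uses an appropriate graph distance metric instead."""
    return np.linalg.norm(fc_a - fc_b)

def knn_predict(train_fcs, train_labels, test_fc, k=3):
    """Classify a subject's FC matrix by majority vote among the k
    nearest training matrices under the chosen distance."""
    dists = [graph_distance(test_fc, fc) for fc in train_fcs]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

The same skeleton works for any FC estimator: only the construction of the matrices (bivariate, multivariate extension, or fully multivariate) changes, which is what makes the comparison across estimators systematic.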
Li, S.; Zeng, D.; Dong, X.; He, Y.; Che, T.; Zhang, J.; Yang, Z.; Jiang, J.; Chu, L.; Han, Y.; Li, S.
A central objective in neuroscience is to elucidate how the brain generates complex dynamic activity through the interactions of brain areas. In this study, we used Interaction Network, a graph neural network model, to develop a computational framework for predicting whole-brain cortical blood-oxygenation-level-dependent (BOLD) signals. We derived an Inter-Regional Interaction (IRI) metric to quantify information exchange among brain areas and probe the underlying dynamical mechanisms. In addition, the total IRI emitted by each brain region was calculated and defined as the IRI sent by region (RS-IRI). Our model predicted BOLD activity over the following 10 time points from initial BOLD signals and achieved a mean absolute error of 0.04. The predicted functional connectivity (FC) achieves a correlation coefficient of 0.97 with the empirical FC. The fluctuation amplitude of the IRI increases with the length of the connection, and the largest RS-IRI oscillation amplitude is observed in visual areas. The RS-IRI demonstrates a hierarchical organization, characterized by more concentrated distributions in association regions and larger fluctuation amplitudes in unimodal regions. Applying our approach to Alzheimer's disease (AD), we demonstrate that the frequency-specific amplitudes of IRI oscillations discriminate AD patients from healthy controls and correlate with Mini-Mental State Examination scores. Together, this work presents a deep learning-based framework for modeling brain dynamics as well as a quantitative index of inter-areal interactions, and offers a new perspective for disease characterization. Author Summary: The human brain comprises distinct regions that interact through complex fiber tracts, forming the functional dynamics underlying diverse cognitive processes. We employed fMRI to assess functional activity and DTI to reconstruct fiber tract connectivity.
To elucidate how brain function emerges from these inter-regional interactions, we developed a novel computational framework based on a graph neural network (GNN), chosen for its capacity to uncover hidden and intricate patterns within data, to model the brain's interactive dynamics. From this model, we derived a quantitative metric termed Inter-Regional Interaction (IRI), which characterizes the fine-grained, dynamic fluctuations in communication between brain areas. Our results suggest that this GNN-based model can accurately simulate brain functional activity and provide a quantitative description of neural interaction patterns. Applying this model to a cohort of Alzheimer's disease patients, we demonstrated that the IRI metric not only effectively distinguished patients from healthy controls but also significantly correlated with clinical cognitive performance (MMSE scores). This approach advances our understanding of the fundamental principles of brain function and offers a promising tool for identifying the underlying mechanisms of neurological disorders.
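A minimal sketch of the flavor of an Interaction-Network-style update: each structural edge turns the sender and receiver states into a message, messages are aggregated per node, and nodes are updated from their state plus the aggregate. The per-edge message magnitude plays the role of an IRI-like quantity. This is an untrained toy with hypothetical names, not the authors' architecture.

```python
import numpy as np

def interaction_step(x, adj, w_edge, w_node):
    """One Interaction-Network-style update.
    x: (n, d) region states; adj: (n, n) structural connectivity mask;
    w_edge: (2d, d) edge-message weights; w_node: (2d, d) node-update weights."""
    n, d = x.shape
    msgs = np.zeros((n, n, d))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                # message carried along the edge from region j to region i
                msgs[i, j] = np.tanh(np.concatenate([x[i], x[j]]) @ w_edge)
    agg = msgs.sum(axis=1)  # aggregate incoming messages per region
    x_next = np.tanh(np.concatenate([x, agg], axis=1) @ w_node)
    # a crude IRI-like quantity: magnitude of each directed message
    iri = np.linalg.norm(msgs, axis=2)
    return x_next, iri
```

Iterating such a step forward from an initial state yields a predicted multivariate time series, from which a predicted FC matrix can be computed and compared against the empirical FC, as in the study's evaluation.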