Neurocomputing

Elsevier BV

All preprints, ranked by how well they match Neurocomputing's content profile, based on 13 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.

1
Efficient Deep Network Architecture for COVID-19 Detection Using Computed Tomography Images

Goel, C.; Kumar, A.; Dubey, S. K.; Srivastava, V.

2020-08-17 radiology and imaging 10.1101/2020.08.14.20170290 medRxiv
Top 0.1%
14.8%
Globally, the devastating consequences of COVID-19, caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), have endangered lives worldwide. Doctors and scientists throughout the world are working day and night to combat the spread of this deadly disease through technology, finances, data repositories, protective equipment, and many other services. Rapid and efficient detection of COVID-19 reduces the rate of spread, and early treatment improves the recovery rate. In this paper, we propose a new framework that exploits powerful features extracted from an autoencoder and the Gray Level Co-occurrence Matrix (GLCM), combined with a random forest classifier, for efficient and fast detection of COVID-19 from computed tomography images. The model's performance is evident from its 97.78% accuracy, 96.78% recall, and 98.77% specificity.
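The texture-feature half of this pipeline can be sketched in a few lines. The GLCM below uses hypothetical parameters (8 gray levels, a single one-pixel horizontal offset) rather than the paper's actual settings, and the autoencoder and random forest stages are omitted; it assumes a non-constant grayscale image.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Symmetric, normalized Gray Level Co-occurrence Matrix for one pixel
    offset (illustrative parameters; assumes img.max() > 0)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    dy, dx = offset
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    m += m.T                      # symmetrize
    return m / m.sum()            # normalize to joint probabilities

def glcm_features(p):
    """Classic Haralick-style scalar features computed from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, homogeneity, energy])
```

In the paper's pipeline, features like these would be concatenated with autoencoder features and fed to the random forest.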

2
Memory Consolidation with Orthogonal Gradients for avoiding Catastrophic Forgetting

Kanagamani, T.; Krishnamurthy, R.; Chakravarthy, S.; Ravindran, B.; Menon, R. N.

2022-02-28 neuroscience 10.1101/2022.02.25.481890 medRxiv
Top 0.1%
14.1%
The memory consolidation process enables the accumulation of recent and remote memories in the long-term memory store. Deep network models of memory generally forget old information while learning new information, a failure called catastrophic forgetting/interference. The human brain overcomes this problem quite effectively, whereas it continues to challenge current deep neural network models. We propose a regularization-based model to solve the problem of catastrophic forgetting. Under the proposed training mechanism, the network parameters are constrained to vary in a direction orthogonal to the average of the error gradients of the previous tasks. We also ensure that the constraint used in the parameter update satisfies the locality principle. The proposed model's performance is compared with Elastic Weight Consolidation (EWC) on standard classification benchmarks such as permuted MNIST and split MNIST, using both fully connected and convolution-based networks. Performance is also compared with an autoencoder on split MNIST, and with EWC on the more complex CORe50 dataset on two types of classification tasks. The proposed model offers a new view of plasticity at the neuronal level: parameter updating is controlled by neuron-level plasticity rather than the synapse-level plasticity of other standard models. The biological plausibility of the model is discussed by linking the extra parameters to synaptic tagging, which represents the state of a synapse involved in long-term potentiation (LTP).
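The core constraint, updating only in directions orthogonal to the averaged previous-task gradient, can be sketched as a simple vector projection. This is a minimal illustration under the abstract's description, not the paper's full training loop; the locality-preserving machinery is omitted.

```python
import numpy as np

def project_orthogonal(grad, prev_avg_grad):
    """Remove the component of grad along the averaged previous-task
    gradient, so the parameter update is orthogonal to it."""
    g, p = grad.ravel(), prev_avg_grad.ravel()
    denom = p @ p
    if denom == 0.0:
        return grad                       # nothing to protect against
    return (g - (g @ p) / denom * p).reshape(grad.shape)
```

The projected gradient then replaces the raw gradient in an otherwise ordinary SGD step, so new-task learning cannot move weights along the direction that mattered most for earlier tasks.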

3
Numerical reproduction of the Sherrington-Adrian observations through a community of McCulloch-Pitts neurons with plastic remodelling

Irastorza Valera, L.; Benitez Baena, J. M.; Montans, F.; Saucedo-Mora, L.

2023-12-07 bioengineering 10.1101/2023.12.05.570084 medRxiv
Top 0.1%
12.4%
Neurons form a highly complex network that produces cognition from simple associative rules. Building on previous results, this work shows the natural capability of the resulting numerical network to modulate the output signal independently of the intensity of the stimuli. Moreover, the plastic remodelling implemented in the model can shift the latency of a wide range of stimuli to synchronize them and match a required signal delay.

4
Deep Coupled Kuramoto Oscillatory Neural Network (DcKONN): A Biologically Inspired Deep Neural Model for EEG Signal Analysis

Ghosh, S.

2025-09-30 bioengineering 10.1101/2025.09.26.678831 medRxiv
Top 0.1%
10.5%
Deep neural networks applied to signal processing tasks often need specialized architectural mechanisms to capture the temporal history of input signals. Traditional approaches include recurrent loops between layers, gated units, or tapped delay lines. However, biological brains exhibit much richer dynamics, characterized by activity across multiple frequency bands (alpha, beta, gamma, delta) and phenomena such as phase locking and synchronization. Standard Recurrent Neural Networks (RNNs) are limited in their ability to represent these complex dynamical features. In this work, we introduce a novel framework called the Deep Coupled Kuramoto Oscillatory Neural Network (DcKONN), which leverages networks of nonlinear Kuramoto oscillators trained in a deep learning paradigm. The DcKONN architecture is applied to an EEG signal classification task. Simulation results demonstrate that the proposed oscillatory neural networks achieve superior or comparable classification accuracy compared to existing state-of-the-art models. Beyond performance improvements, these models also provide valuable neurobiological insights by naturally incorporating oscillatory dynamics into their architecture.
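A bare-bones version of the underlying dynamics, not the DcKONN training scheme itself, is the classic Kuramoto update. The sketch below uses Euler integration with an assumed all-to-all coupling matrix and zero natural frequencies, purely to show phase locking emerge.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.05):
    """One Euler step of coupled Kuramoto phase oscillators:
    dtheta_i/dt = omega_i + sum_j K[i, j] * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r -> 1 means phase synchrony."""
    return np.abs(np.exp(1j * theta).mean())
```

With attractive coupling, a random initial phase configuration drifts toward synchrony (r rising toward 1), the kind of band-limited locking behavior the abstract argues standard RNNs cannot represent.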

5
Brain-inspired Weighted Normalization for CNN Image Classification

Pan, X.; Kartal, E.; Sanchez Giraldo, L. G.; Schwartz, O.

2021-05-22 neuroscience 10.1101/2021.05.20.445029 medRxiv
Top 0.1%
10.2%
We studied a local normalization paradigm, weighted normalization, that better reflects the current understanding of the brain: the normalization weights are trainable, and the surround pool is selected more realistically. Weighted normalization outperformed other normalizations in image classification tasks on CIFAR-10, ImageNet, and a customized textured MNIST dataset. The performance advantage is more prominent when the CNN is shallow. The good performance of weighted normalization may be related to its statistical effect of Gaussianizing the responses.

6
Inferring Neuron-level Brain Circuit Connection via Graph Neural Network Amidst Small Established Connections

Wan, G.; Liao, M.; Zhao, D.; Wang, Z.; Pan, S.; Du, B.

2023-07-02 neuroscience 10.1101/2023.06.29.547138 medRxiv
Top 0.1%
10.2%
Motivation: Reconstructing neuron-level brain circuit networks is universally recognized as a formidable task. A significant impediment is discerning the intricate interconnections among the multitude of neurons in a complex brain network. The majority of current methods rely only on learning local visual synapse features while neglecting comprehensive global topological connectivity information. In this paper, we take the perspective of network connectivity and introduce graph neural networks to learn the topological features of brain networks. We propose the Neuronal Circuit Prediction Network (NCPNet), a simple and effective model that jointly learns node structural representations and neighborhood representations, constructing neuronal connection-pair features for inferring neuron-level connections in a brain circuit network.
Results: We use a small number of connections randomly selected from a single brain circuit network as training data, expecting NCPNet to extrapolate known connections to unseen instances. We evaluated our model on the Drosophila connectome and the C. elegans connectome. The numerical results demonstrate that our model achieves a prediction accuracy of 91.88% for neuronal connections in the Drosophila connectome when utilizing only 5% of known connections. Similarly, with 5% of known connections in C. elegans, our model achieves an accuracy of 93.79%. Additional qualitative analysis of the learned representation vectors of Kenyon cells indicates that NCPNet acquires meaningful features that enable the discrimination of neuronal sub-types. Our project is available at https://github.com/mxz12119/NCPNet.

7
Transfer Learning for COVID-19 Pneumonia Detection and Classification in Chest X-ray Images

Katsamenis, I.; Protopapadakis, E.; Voulodimos, A.; Doulamis, A.; Doulamis, N.

2020-12-16 radiology and imaging 10.1101/2020.12.14.20248158 medRxiv
Top 0.1%
10.1%
We introduce a deep learning framework that can detect COVID-19 pneumonia in thoracic radiographs and differentiate it from bacterial pneumonia infection. Deep classification models, such as convolutional neural networks (CNNs), require large-scale datasets to be trained properly. Since the number of X-ray samples related to COVID-19 is limited, transfer learning (TL) appears as the go-to method to alleviate the demand for training data and develop accurate automated diagnosis models: networks gain knowledge from networks pretrained on large-scale image datasets or on alternative data-rich sources (e.g., bacterial and viral pneumonia radiographs). The experimental results indicate that the TL approach outperforms training without TL on the COVID-19 classification task in chest X-ray images.

8
Online COVID-19 diagnosis with chest CT images: Lesion-attention deep neural networks

Liu, B.; Gao, X.; He, M.; Lv, F.; Yin, G.

2020-05-29 radiology and imaging 10.1101/2020.05.11.20097907 medRxiv
Top 0.1%
10.1%
Chest computed tomography (CT) scanning is one of the most important technologies for COVID-19 diagnosis and disease monitoring, particularly for early detection of the coronavirus. Recent advances in computer vision motivate more concerted efforts to develop AI-driven diagnostic tools that can accommodate the enormous global demand for COVID-19 diagnostic tests. To help alleviate the burden on medical systems, we develop a lesion-attention deep neural network (LA-DNN) that predicts COVID-19 positivity from a richly annotated chest CT image dataset. From the textual radiological report accompanying each CT image, we extract two types of information for the annotations: an indicator of a positive or negative case of COVID-19, and a description of five lesions on the CT images associated with positive cases. The proposed data-efficient LA-DNN model focuses on the primary task of binary classification for COVID-19 diagnosis, while an auxiliary multi-label learning task is trained simultaneously to draw the model's attention to the five lesions associated with COVID-19. The joint learning process makes it a highly sample-efficient deep neural network that can learn COVID-19 radiology features more effectively from limited but high-quality, information-rich samples. The experimental results show that the area under the curve (AUC), sensitivity (recall), precision, and accuracy for COVID-19 diagnosis are 94.0%, 88.8%, 87.9%, and 88.6%, respectively, which reach clinical standards for practical use. A free online system for fast diagnosis from CT images is live at https://www.covidct.cn/, and all code and datasets are freely accessible at our GitHub address.

9
Upgrading Voxel-wise Encoding Model via Integrated Integration over Features and Brain Networks

Li, Y.; Yang, H.; Gu, S.

2022-11-07 neuroscience 10.1101/2022.11.06.515387 medRxiv
Top 0.1%
9.9%
A central goal of cognitive neuroscience is to build computational models that predict and explain neural responses to sensory inputs in the cortex. Recent studies attempt to borrow the representational power of deep neural networks (DNNs) to predict brain responses, and suggest a correspondence between artificial and biological neural networks in their feature representations. However, each DNN instance is often specialized for certain computer vision tasks, which may not lead to optimal brain correspondence. Furthermore, these voxel-wise encoding models predict single voxels independently, while brain activity often demonstrates rich and dynamic structure at the population and network levels during cognitive tasks. These two properties suggest that the prevalent voxel-wise encoding models can be improved by integrating features across DNN models and by integrating cortical network information into the models. In this work, we propose a new unified framework that addresses both aspects through DNN feature-level ensemble learning and brain atlas-level model integration. Our approach achieves superior performance over previous DNN-based encoding models in predicting whole-brain neural activity during naturalistic video perception. Furthermore, our unified framework facilitates the investigation of the brain's neural representation mechanisms by accurately predicting the neural responses corresponding to complex visual concepts.

10
Self-Configuring Capsule Networks for Brain Image Segmentation

Avesta, A. E.; Hossain, S.; Aboian, M.; Krumholz, H.; Aneja, S.

2023-03-03 radiology and imaging 10.1101/2023.02.28.23286596 medRxiv
Top 0.1%
9.2%
When an auto-segmentation model is applied to a new segmentation task, multiple decisions must be made about pre-processing steps and training hyperparameters. These decisions are cumbersome and require a high level of expertise. To remedy this problem, we developed self-configuring CapsNets (scCapsNets) that scan the training data and the available computational resources, and then self-configure most of their design options with minimal user input. We show that our self-configuring capsule network can segment brain tumor components, namely the edema and enhancing core of brain tumors, with high accuracy. Our model outperforms UNet-based models in the absence of data augmentation, is faster to train, and is computationally more efficient than UNet-based models.

11
A deep network-based model of hippocampal memory functions under normal and Alzheimer's disease conditions

Kanagamani, T.; Chakaravarthy, V. S.; Ravindran, B.

2021-02-02 neuroscience 10.1101/2021.01.31.429076 medRxiv
Top 0.1%
9.1%
We present a deep network-based model of the associative memory functions of the hippocampus. The proposed network architecture has two key modules: 1) an autoencoder module that represents the forward and backward cortico-hippocampal projections, and 2) a module that computes the familiarity of the stimulus and implements hill-climbing over that familiarity, representing the dynamics of the loops within the hippocampus. The proposed network is used in two simulation studies. In the first, the network simulates image pattern completion by autoassociation under normal conditions. In the second, the network is extended to a heteroassociative memory and simulates a picture-naming task under normal and Alzheimer's disease (AD) conditions. The network is trained on pictures and names of the digits 0-9, and the encoder layer is partly damaged to simulate AD. As in AD patients, under moderate damage the network recalls superordinate words ("odd" instead of "nine"); under severe damage it gives a null response ("I don't know"). The neurobiological plausibility of the model is extensively discussed.
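The recall dynamics described here, hill-climbing over a scalar familiarity signal, can be sketched generically. The familiarity function and Gaussian perturbation scheme below are illustrative stand-ins, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_climb(x, familiarity, steps=300, sigma=0.1):
    """Stochastic hill-climbing: accept a random perturbation of the
    pattern only if it raises the familiarity score (illustrative stand-in
    for the paper's hippocampal recall loop)."""
    for _ in range(steps):
        cand = x + sigma * rng.standard_normal(x.shape)
        if familiarity(cand) > familiarity(x):
            x = cand
    return x
```

Because only improving moves are accepted, the final pattern is never less familiar than the starting one; a trained autoencoder's reconstruction error could serve as the (negated) familiarity signal.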

12
A whole-brain model of auditory discrimination

Turan, A.; Baspinar, E.; Destexhe, A.

2023-09-23 neuroscience 10.1101/2023.09.23.559095 medRxiv
Top 0.1%
9.1%
Whole-brain simulations have previously been proposed to simulate global properties such as brain states or functional connectivity. Here, our aim is to build a whole-brain model that simulates a simple cognitive paradigm involving multiple brain areas. We focus on auditory discrimination, using a paradigm designed for the macaque cortex. To model at the whole-brain scale, we use The Virtual Brain (TVB) [18] simulation environment. TVB is a computational framework that simulates the brain as a network of small brain regions, where each node models neuronal populations and the connectivity between nodes determines the pathway of information flow over the brain. We use Adaptive Exponential (AdEx) neuronal population models [4, 11] to describe each node. For the connectivity, we use the open-access CoCoMac dataset [2], a matrix containing the connection weights between the nodes. We focus on a cognitive task that mainly involves the prefrontal cortex (PFC). In the auditory discrimination task, our pipeline starts from the primary auditory cortex, stimulated by the auditory signals; the signal is then modulated in the PFC, where stimulus discrimination occurs after competition; finally, it ends in the primary motor cortex, which outputs the neuronal activity determining the motor action. Because the AdEx mean-fields provide access to neuronal activity and local field potentials, we think the present model constitutes a useful tool for promoting interactions between theory and experiment on simple cognitive tasks in the macaque monkey.

13
A Computational Theory of Learning Flexible Reward-Seeking Behavior with Place Cells

Gao, Y.

2022-04-25 neuroscience 10.1101/2022.04.23.489289 medRxiv
Top 0.1%
8.8%
An important open question in computational neuroscience is how various spatially tuned neurons, such as place cells, support the learning of reward-seeking behavior in an animal. Existing computational models either lack biological plausibility or fall short of behavioral flexibility when environments change. In this paper, we propose a computational theory that achieves behavioral flexibility with better biological plausibility. We first train a mixture of Gaussian distributions to model the ensemble of firing fields of place cells. We then propose a Hebbian-like rule to learn the synaptic strength matrix among place cells; this matrix is interpreted as the transition rate matrix of a continuous-time Markov chain that generates sequential replay of place cells. During replay, the synaptic strengths from place cells to medium spiny neurons (MSNs) are learned by a temporal-difference-like rule to store place-reward associations. After replay, MSN activation ramps up as the animal approaches the rewarding place, so the animal can move along the direction of increasing MSN activation to find it. We implement our theory in a high-fidelity virtual rat in the MuJoCo physics simulator. In a complex maze, the rat shows significantly better learning efficiency and behavioral flexibility than a rat implementing a neuroscience-inspired reinforcement learning algorithm, the deep Q-network.
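The replay step, reading a learned synaptic matrix as a Markov transition structure, can be sketched via the embedded jump chain of the continuous-time chain. The Hebbian learning and MSN stages are omitted, and the ring-shaped weight matrix used in the test is a hypothetical example, not the paper's learned matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def replay_sequence(W, start, steps):
    """Sample a place-cell replay sequence by treating the synaptic strength
    matrix W as transition rates: from cell i, the next cell j is drawn with
    probability W[i, j] / sum_k W[i, k] (self-transitions excluded)."""
    seq = [start]
    for _ in range(steps):
        rates = W[seq[-1]].astype(float).copy()
        rates[seq[-1]] = 0.0                     # no self-transition
        seq.append(int(rng.choice(len(W), p=rates / rates.sum())))
    return seq
```

On a ring of cells where each cell is wired only to its two neighbors, every replayed transition stays on the ring, mirroring how replay traces spatially contiguous trajectories.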

14
Class dependency based learning using Bi-LSTM coupled with the transfer learning of VGG16 for the diagnosis of Tuberculosis from chest x-rays

Gutta, J. C.; G, S.; M, P.; K, K.

2021-07-22 radiology and imaging 10.1101/2021.07.18.21260738 medRxiv
Top 0.1%
8.5%
Tuberculosis (TB) is an infectious disease that leads to the death of millions of people across the world. The mortality rate of this disease is high in patients suffering from immunocompromised disorders. Early diagnosis can save lives and avoid further complications, but the diagnosis of TB is a very complex task: the standard diagnostic tests still rely on traditional procedures developed in the last century, which are slow and expensive. This paper therefore presents an automatic approach for the diagnosis of TB from posteroanterior chest X-rays. It is a two-step approach: first, the lung regions are segmented from the chest X-rays using the graph-cut method; second, transfer learning of VGG16 combined with a bidirectional LSTM extracts high-level discriminative features from the segmented lung regions, and classification is performed by a fully connected layer. The proposed model is evaluated on two publicly available databases, the Montgomery County set and the Shenzhen set, achieving accuracy and sensitivity of 97.76% and 97.01% on Shenzhen and 96.42% and 94.11% on Montgomery County. The model improves the diagnostic accuracy of TB by 0.7% and 11.68% on the Shenzhen and Montgomery County datasets, respectively.

15
Predictive Motor Control Based on a Generative Adversarial Network

Lenninger, M.; Choi, W.-H.; Choi, H.

2023-03-07 neuroscience 10.1101/2023.01.17.524156 medRxiv
Top 0.1%
8.4%
Predictive processing models suggest that the brain decides actions through inference over an internal generative model of the world's states and their transitions. Most predictive processing models have been formalized with explicit representations of the probability distributions, with explicit structures and parameters. Such models are difficult to learn in general and require answers to hard questions: how the structure and parameters of the distributions are represented, and how statistical arithmetic is performed on them. In this study, we explore an alternative representation for predictive processing based on an implicit model, the generative adversarial network (GAN), which has been widely explored in machine learning because it can learn a distribution directly from data. We demonstrate how a GAN can be trained to learn an implicit generative model of motor dynamics, and we show that such a model can perform approximate inference, providing the necessary computations for both the forward and inverse models of motor control. Our framework may provide another formalization of the brain's inference model, especially of the learning process. Additionally, we suggest that the functional architecture of the cortico-basal ganglia circuit may be modeled as the generator and discriminator of a generative adversarial network.

16
Toward One-Shot Learning in Neuroscience-Inspired Deep Spiking Neural Networks

Faghihi, F.; Molhem, H.; Moustafa, A.

2019-11-04 neuroscience 10.1101/829556 medRxiv
Top 0.1%
8.3%
Conventional deep neural networks capture essential information-processing stages in perception, but they often require very large numbers of training examples, whereas children can learn concepts such as hand-written digits from a few examples. The goal of this project is to develop a deep spiking neural network that can learn from few training trials. Using known neuronal mechanisms, a spiking neural network model is developed and trained to recognize hand-written digits from one to four training examples per digit taken from the MNIST database. The model detects and learns geometric features of the images. A novel biological back-propagation-based learning rule is developed and used to train the network to detect basic features of the different digits by updating randomly initialized synaptic weights between the layers. Using a neuroscience-inspired mechanism named synaptic pruning with a predefined threshold, some synapses are deleted during training. This constructs information channels, matrices of synaptic connections between two layers of the spiking network, that are highly specific to each digit; these connection matrices are used in the test phase to assign a digit class to each test image. Mirroring humans' ability to learn from few trials, the developed spiking neural network needs a very small training set compared with conventional deep learning methods evaluated on MNIST.
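The pruning step described here amounts to thresholding the weight matrix; a minimal sketch, with an arbitrary threshold value not taken from the paper:

```python
import numpy as np

def prune_synapses(W, thresh):
    """Synaptic pruning: delete (zero out) every synapse whose absolute
    weight falls below the threshold, leaving sparse 'information channels'."""
    return np.where(np.abs(W) >= thresh, W, 0.0)
```

Applied per digit class, the surviving nonzero entries form the digit-specific connection matrices used at test time.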

17
Super-resolution segmentation network for reconstruction of packed neurites

Zhou, H.; Quan, T.; Huang, Q.; Liu, T.; Cao, T.; Zeng, S.

2020-06-11 neuroscience 10.1101/2020.06.09.143347 medRxiv
Top 0.1%
7.0%
Neuron reconstruction provides the quantitative data required for measuring neuronal morphology and is crucial in brain research. However, the difficulty of reconstructing packed neurites, for which massive labor is required in most cases, has not been resolved. In this work, we provide a fundamental pathway toward solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds a mapping from the neurites in the original neuronal images to their segmentation in a higher-resolution space. SRSNet focuses on enlarging the distances between the boundaries of packed neurites, producing high-resolution segmentation images. Thus, only the traced skeletons of neurites are required to construct the training datasets, which vastly increases the usability of SRSNet. The experiments conducted in this work show that SRSNet achieves accurate reconstruction of packed neurites where other state-of-the-art methods fail.

18
Time-varying hierarchical core voxels disclosed by k-core percolation on dynamic inter-voxel connectivity resting-state fMRI

Lee, D. S.; Huh, Y.; Kang, Y. K.; Whi, W.; Lee, H.; Kang, H.

2022-06-26 neuroscience 10.1101/2022.06.23.497413 medRxiv
Top 0.1%
7.0%
k-core percolation on scale-free static brain connectivity revealed the hierarchical structure of inter-voxel correlations, which was successfully visualized by hyperbolic disc embedding of resting-state fMRI. In the static study, flag plots and brain-rendered kmax-core displays showed changes in the hierarchical structure of voxels belonging to functionally independent components (ICs). In this dynamic sliding-window study, the temporal progress of the hierarchical structure of voxels was investigated across individuals and across sessions of an individual. kmax-core and coreness-k values characterizing time-varying core voxels were visualized on animated stacked histograms/flag plots and animated brain-rendered images. Resting-state fMRI from the Human Connectome Project and from the Kirby weekly dataset revealed slow progress and multiple abrupt state transitions of the voxels of coreness k at the uppermost hierarchy, representing correlated time-varying mental states across individuals and across sessions. We suggest that these characteristic core voxel-IC compositions in the dynamic study fingerprint the time-varying resting states of human minds.
One-sentence summary: Dynamic state transitions of hierarchical functional inter-voxel connectivity implied time-varying mental states at rest on fMRI.
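The underlying operation, k-core percolation, is easy to state: repeatedly delete nodes of degree below k until none remain. A pure-Python sketch on an adjacency-set graph, purely illustrative; the paper applies this to voxel-level fMRI correlation networks:

```python
def k_core(adj, k):
    """Return the node set of the k-core of an undirected graph given as
    {node: set(neighbors)}: iteratively remove nodes of degree < k."""
    nodes = {n: set(nbrs) for n, nbrs in adj.items()}
    queue = [n for n in nodes if len(nodes[n]) < k]
    while queue:
        n = queue.pop()
        if n not in nodes:
            continue                      # already removed
        for m in nodes.pop(n):
            if m in nodes:
                nodes[m].discard(n)
                if len(nodes[m]) < k:
                    queue.append(m)       # removal may expose new victims
    return set(nodes)
```

Sweeping k upward until the core vanishes yields the kmax-core, and the largest k at which a voxel survives is its coreness, the quantities tracked over sliding windows in the study.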

19
Corrective feedback control of competing neural network with entire connections

Wang, U.

2022-01-12 neuroscience 10.1101/2022.01.10.475737 medRxiv
Top 0.1%
6.9%
Persistent activity of competitive networks underlies many functions, such as working memory, the oculomotor integrator, and decision making. Many competition models with mutual-inhibition structures maintain activity via positive feedback, which requires meticulous fine-tuning of the network parameters. Negative derivative feedback, according to recent research, may represent a novel mechanism for sustaining neural activity that is more robust to various neural perturbations than positive feedback. Classic models with only a mutual-inhibition structure cannot provide negative derivative feedback, because double inhibition acts as a positive feedback loop and these models lack the negative feedback loop that is indispensable for negative derivative feedback. Here, we derive a new competition network with negative derivative feedback. The network is made up of two symmetric pairs of excitatory-inhibitory populations in which all four populations are fully connected. From mathematical analysis and numerical simulation, we conclude that negative derivative feedback arises in two regimes, one in which the activity of the two sides is synchronous and one in which it is push-pull-like, as well as in switches between the two.

20
The development, recognition, and learning mechanisms of an animal-like neural network

Qi, F.; Wu, W.

2019-06-01 neuroscience 10.1101/535724 medRxiv
Top 0.1%
6.9%
How the animal nervous system solves the identity-preserving object recognition problem is largely unknown. Artificial neural networks such as convolutional neural networks (CNNs) have reached human-level performance on recognition tasks; however, the animal nervous system does not support such kernel-scanning operations across retinal neurons, and thus neuronal responses do not match those of CNN units. Here, we used an alternative recognition-reconstruction network (RRN) architecture as an analogy to an animal-like system, and the resulting neural characteristics agreed fairly well with electrophysiological measurements in monkey studies. First, in a network development study, the RRN experienced critical developmental stages characterized by specific neuronal types, connectivity strengths, and firing patterns, progressing from an early stage of coarse salience-map recognition to a mature stage of fine-structure recognition. In a digit recognition study, the RRN maintained invariant object representations under various viewing conditions through coordinated adjustment of the responses of population neurons, and such concerted population responses contained untangled object identity and property information that could be accurately extracted by a simple weighted-summation decoder. In a learning-and-forgetting study, novel structure recognition was implemented by adjusting all synapses by a small magnitude while preserving the pattern specificity of the original synaptic connectivity, guaranteeing a learning process that does not disrupt existing functionality. This work benefits the understanding of human neural mechanisms and the development of human-like intelligence.