Impact of Kernel Dimensionality on the Generalizability and Efficiency of Convolutional Neural Networks to Decode Neural Drive from High-density Electromyography Signal

Fu, J.; Huang, H. J.; Wen, Y.

2026-03-24 neuroscience
10.64898/2026.03.20.712696 bioRxiv

Objective: Convolutional neural networks (CNNs) have shown promise in decoding neural drive from high-density surface electromyography (HD-sEMG) signals. However, the effects of convolutional kernel dimensionality on the generalizability and computational efficiency of CNN-based neural drive decoding remain unclear. This study systematically examined how the dimensionality of convolutional kernels (1D, 2D, and 3D) affects both.

Approach: Three CNN architectures differing only in the dimensionality of their convolutional kernels were implemented to extract temporal (1D), spatial (2D), or spatiotemporal (3D) features from HD-sEMG signals recorded during isometric knee extension and ankle plantarflexion at three contraction intensities. Each CNN was repeatedly trained on subsets of a pooled training dataset of varying sizes. Cross-intensity and cross-muscle generalizability were assessed by the correlation coefficient between the neural drive estimated by each CNN and that obtained from gold-standard blind source separation (BSS) algorithms. Computational efficiency was assessed by measuring inference time on both CPU and GPU platforms.

Main results: All CNN architectures generalized across contraction intensities and muscles. For cross-intensity generalizability, the 1D, 2D, and 3D CNNs achieved mean correlation coefficients of 0.986 ± 0.009, 0.987 ± 0.010, and 0.987 ± 0.010, respectively. For cross-muscle generalizability, the corresponding correlation coefficients were 0.961 ± 0.051, 0.965 ± 0.049, and 0.968 ± 0.046. The 3D CNN was the least computationally efficient, with inference times of 4.1 ms per sample on the CPU and 1.2 ms per sample on the GPU.

Significance: These findings demonstrate that increased CNN architectural complexity does not necessarily yield superior generalizability in neural drive decoding from HD-sEMG signals.
The results provide practical guidance for balancing decoding performance and computational efficiency in HD-sEMG-based neural-machine interfaces.
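The generalizability metric in the abstract is a correlation coefficient between two neural drive estimates. A minimal sketch of that comparison, assuming a plain Pearson correlation over two equal-length signal arrays (the function name and the toy data are illustrative, not taken from the paper):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: a CNN estimate that closely tracks a BSS reference signal.
bss = [0.10, 0.40, 0.90, 0.70, 0.30, 0.20]
cnn = [0.12, 0.38, 0.88, 0.72, 0.31, 0.18]
r = pearson_r(bss, cnn)  # close to, but below, 1.0
```

A value near 1.0, as in the paper's reported 0.96-0.99 range, indicates that the CNN output tracks the BSS reference almost linearly.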

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                                             Training papers  Percentile  Probability
1     Journal of Neural Engineering                                       197              Top 0.1%    39.9%
2     eneuro                                                              389              Top 1%      4.9%
3     PLOS Computational Biology                                          1633             Top 7%      4.6%
4     Human Brain Mapping                                                 295              Top 1%      4.4%
      (50% of probability mass above)
5     Journal of Neuroscience Methods                                     106              Top 0.3%    4.0%
6     Scientific Reports                                                  3102             Top 30%     4.0%
7     NeuroImage                                                          813              Top 3%      3.6%
8     Imaging Neuroscience                                                242              Top 1%      2.9%
9     Journal of NeuroEngineering and Rehabilitation                      28               Top 0.4%    2.8%
10    IEEE Transactions on Biomedical Engineering                         38               Top 0.5%    1.7%
11    Frontiers in Human Neuroscience                                     67               Top 1%      1.7%
12    Brain Stimulation                                                   112              Top 0.9%    1.7%
13    PLOS ONE                                                            4510             Top 54%     1.7%
14    Frontiers in Neuroscience                                           223              Top 5%      1.2%
15    IEEE Transactions on Neural Systems and Rehabilitation Engineering  40               Top 0.4%    1.2%
16    Communications Biology                                              886              Top 14%     1.2%
17    Journal of Neurophysiology                                          263              Top 0.6%    1.2%
18    Clinical Neurophysiology                                            50               Top 0.5%    1.1%
19    Annals of Clinical and Translational Neurology                      29               Top 1%      0.8%
20    eLife                                                               5422             Top 59%     0.7%
21    Frontiers in Neurology                                              91               Top 5%      0.7%
22    eBioMedicine                                                        130              Top 5%      0.7%
23    IEEE Journal of Biomedical and Health Informatics                   34               Top 3%      0.5%
24    Sensors                                                             39               Top 2%      0.5%
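The "50% of probability mass" cutoff used above is just a cumulative sum over the probabilities in descending order. A small sketch (the helper name is illustrative; the probability values are copied from the table, expressed as fractions):

```python
def top_k_for_mass(probs, target=0.5):
    """Return the smallest k such that the top-k probabilities sum to >= target.

    Assumes probs is already sorted in descending order, as in the table.
    """
    total = 0.0
    for k, p in enumerate(probs, start=1):
        total += p
        if total >= target:
            return k
    return len(probs)

# Predicted probabilities for the highest-ranked journals, from the table.
probs = [0.399, 0.049, 0.046, 0.044, 0.040, 0.040, 0.036, 0.029, 0.028]
k = top_k_for_mass(probs)  # the top 4 sum to 0.538 >= 0.5
```

With these values the cumulative mass crosses 0.5 at rank 4 (0.399 + 0.049 + 0.046 + 0.044 = 0.538), matching the cutoff line drawn in the table.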