Evaluating Model Performance with Hard-Swish Activation Function Adjustments

Nguyen, H. P.

2024-12-18 · medRxiv · Surgery
DOI: 10.1101/2024.12.18.24319237
Abstract

In the field of pattern recognition, achieving high accuracy is essential. When training a model to recognize complex images, it is vital to fine-tune the model to reach the highest accuracy possible. One fine-tuning strategy is to change the model's activation function. Most pre-trained models use ReLU as their default activation function, but switching to an alternative such as Hard-Swish can be beneficial. This study evaluates the performance of models using the ReLU, Swish, and Hard-Swish activation functions across diverse image datasets. Our results show a 2.06% increase in accuracy on the CIFAR-10 dataset and a 0.30% increase in accuracy on the ATLAS dataset. Modifying the activation functions in the architecture of pre-trained models leads to improved overall accuracy.
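
The kind of modification the abstract describes, swapping ReLU for Hard-Swish in a pre-trained architecture, can be sketched in a few lines of PyTorch. This is an illustrative sketch rather than the authors' code: the choice of torchvision's ResNet-18 and the swap_relu_for_hardswish helper are assumptions for demonstration only.

```python
import torch.nn as nn
from torchvision import models

def swap_relu_for_hardswish(module: nn.Module) -> None:
    """Recursively replace every nn.ReLU submodule with nn.Hardswish in place.

    Hardswish(x) = x * ReLU6(x + 3) / 6, a piecewise approximation of Swish.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.Hardswish(inplace=True))
        else:
            swap_relu_for_hardswish(child)

# Illustrative only: load a pre-trained backbone, swap the activations,
# then fine-tune on the target dataset as usual.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
swap_relu_for_hardswish(model)
```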

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Predicted probability
1 | Biology Methods and Protocols | 53 | Top 0.1% | 18.5%
2 | PLOS ONE | 4510 | Top 12% | 14.9%
3 | Scientific Reports | 3102 | Top 13% | 6.9%
4 | PLOS Computational Biology | 1633 | Top 5% | 6.4%
5 | Heliyon | 146 | Top 0.1% | 4.9%
6 | Human Brain Mapping | 295 | Top 2% | 3.6%
7 | Cancers | 200 | Top 2% | 3.1%
8 | Bioengineering | 24 | Top 0.1% | 3.1%
9 | Frontiers in Computational Neuroscience | 53 | Top 0.9% | 2.4%
10 | Medical Image Analysis | 33 | Top 0.6% | 1.7%
11 | IEEE Access | 31 | Top 0.3% | 1.7%
12 | Neuroinformatics | 40 | Top 0.6% | 1.3%
13 | BMC Medical Informatics and Decision Making | 39 | Top 2% | 1.3%
14 | Brain Communications | 147 | Top 2% | 1.2%
15 | Frontiers in Neuroscience | 223 | Top 6% | 1.0%
16 | Journal of Medical Imaging | 11 | Top 0.2% | 1.0%
17 | Neural Networks | 32 | Top 0.6% | 1.0%
18 | BMC Neurology | 12 | Top 0.7% | 0.9%
19 | Bioinformatics | 1061 | Top 9% | 0.8%
20 | eBioMedicine | 130 | Top 3% | 0.8%
21 | Frontiers in Plant Science | 240 | Top 5% | 0.8%
22 | PLOS Digital Health | 91 | Top 2% | 0.8%
23 | Sensors | 39 | Top 2% | 0.8%
24 | JMIRx Med | 31 | Top 2% | 0.8%
25 | Mathematics | 11 | Top 0.4% | 0.8%
26 | Informatics in Medicine Unlocked | 21 | Top 1% | 0.8%
27 | Communications Biology | 886 | Top 23% | 0.8%
28 | Biomedical Optics Express | 84 | Top 1% | 0.8%
29 | Neurocomputing | 13 | Top 0.6% | 0.8%
30 | Nature Communications | 4913 | Top 65% | 0.7%
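
As a quick sanity check on the statement that the top 5 journals account for 50% of the predicted probability mass, the per-journal probabilities listed above can be accumulated until the running total first reaches 50%. This is a minimal sketch assuming the percentages shown are the model's per-journal probabilities.

```python
# Per-journal probabilities as listed above (top 10 shown), in percent.
probs = [18.5, 14.9, 6.9, 6.4, 4.9, 3.6, 3.1, 3.1, 2.4, 1.7]

cumulative = 0.0
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:
        print(f"Top {rank} journals cover {cumulative:.1f}% of the probability mass")
        break
# -> Top 5 journals cover 51.6% of the probability mass
```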