TF-IDF k-mer-based Classical and Hybrid Machine Learning Models for SARS-CoV-2 Variant Classification under Imbalanced Genomic Data

Haque, N.; Mazed, A.; Ankhi, J. N.; Uddin, M. J.

bioRxiv preprint (bioinformatics), 2026-04-02. DOI: 10.64898/2026.04.02.716024
Abstract

Accurate classification of SARS-CoV-2 genomic variants is essential for effective genomic surveillance, yet it is challenged by extreme class imbalance, limited representation of rare variants, and distribution shifts in real-world sequencing data. In this study, we employed a hybrid RF-SVM framework designed for robust detection of rare SARS-CoV-2 variants. It integrates a random forest and a polynomial-kernel support vector machine to enhance sensitivity to minority classes while maintaining overall predictive stability. We systematically compared classical machine learning models, deep learning approaches, and hybrid strategies under both standard and distribution-shifted evaluation settings. Our results show that classical models using TF-IDF-based k-mer features outperform deep learning methods on macro-averaged performance metrics. The random forest classifier with TF-IDF features achieved the best overall performance, with a macro-averaged F1-score of 0.8894 and an accuracy of 96.3%. The model also demonstrated strong generalization, as evidenced by stable cross-validation performance (CV accuracy = 0.9637). The hybrid RF-SVM model further improves rare-variant detection under severe class imbalance. Calibration analysis indicates reliable probability estimates for common variants, although challenges persist for minority classes. Overall, this study highlights the limitations of deep learning in highly imbalanced genomic settings and demonstrates that carefully designed hybrid machine learning approaches provide an effective and interpretable solution for rare SARS-CoV-2 variant detection.
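A minimal sketch of the kind of pipeline the abstract describes: genome sequences are tokenized into overlapping k-mer "words", weighted with TF-IDF, and fed to a random forest and a polynomial-kernel SVM. The k-mer size, the toy sequences, the `class_weight="balanced"` setting, and the hard-voting fusion rule are all illustrative assumptions; the abstract does not specify how the two models' outputs are combined.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def to_kmers(seq, k=3):
    """Split a genome sequence into overlapping k-mers, space-separated."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy labelled sequences standing in for SARS-CoV-2 genomes (hypothetical).
seqs = ["ATGCGTACGTTAGC", "ATGCGTACGTTAGG", "ATGCGTACGTAAGC", "ATGCGTACGTTACC",
        "TTTACGGGCATCGA", "TTTACGGGCATCGT", "TTTACGGGCAACGA", "TTTACGGGCATGGA"]
labels = ["Alpha"] * 4 + ["Delta"] * 4
docs = [to_kmers(s) for s in seqs]

hybrid = make_pipeline(
    TfidfVectorizer(analyzer="word", token_pattern=r"\S+"),  # each k-mer is one token
    VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, class_weight="balanced",
                                          random_state=0)),
            ("svm", SVC(kernel="poly", degree=3, class_weight="balanced")),
        ],
        voting="hard",  # one simple fusion rule: majority vote over predicted labels
    ),
)
hybrid.fit(docs, labels)
pred = hybrid.predict(docs)
# Macro-averaged F1, the metric the paper reports for imbalanced classes.
print("macro-F1 on training data:", f1_score(labels, pred, average="macro"))
```

In practice, replacing hard voting with probability averaging (soft voting) would let the calibration analysis the abstract mentions apply directly to the fused output.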

Matching journals

The top 6 journals account for 50% of the predicted probability mass.

1. Briefings in Bioinformatics (326 papers in training set), Top 0.1%, 22.3%
2. BMC Bioinformatics (383 papers in training set), Top 1%, 8.3%
3. Scientific Reports (3102 papers in training set), Top 19%, 6.3%
4. PLOS Computational Biology (1633 papers in training set), Top 6%, 6.3%
5. PLOS ONE (4510 papers in training set), Top 35%, 4.1%
6. Computational and Structural Biotechnology Journal (216 papers in training set), Top 2%, 3.6%
(50% of probability mass above this line)
7. IEEE/ACM Transactions on Computational Biology and Bioinformatics (32 papers in training set), Top 0.1%, 3.2%
8. NAR Genomics and Bioinformatics (214 papers in training set), Top 0.9%, 3.0%
9. Frontiers in Genetics (197 papers in training set), Top 3%, 2.7%
10. Bioinformatics (1061 papers in training set), Top 7%, 2.1%
11. Nucleic Acids Research (1128 papers in training set), Top 9%, 1.9%
12. GigaScience (172 papers in training set), Top 1%, 1.9%
13. IEEE Journal of Biomedical and Health Informatics (34 papers in training set), Top 0.9%, 1.8%
14. Bioinformatics Advances (184 papers in training set), Top 3%, 1.7%
15. IEEE Transactions on Computational Biology and Bioinformatics (17 papers in training set), Top 0.2%, 1.7%
16. International Journal of Molecular Sciences (453 papers in training set), Top 8%, 1.7%
17. BioData Mining (15 papers in training set), Top 0.3%, 1.7%
18. BMC Genomics (328 papers in training set), Top 2%, 1.7%
19. Nature Communications (4913 papers in training set), Top 54%, 1.5%
20. Genome Medicine (154 papers in training set), Top 5%, 1.3%
21. Advanced Science (249 papers in training set), Top 13%, 1.3%
22. Communications Biology (886 papers in training set), Top 15%, 1.2%
23. Frontiers in Bioinformatics (45 papers in training set), Top 0.6%, 0.9%
24. Journal of Chemical Information and Modeling (207 papers in training set), Top 3%, 0.9%
25. Frontiers in Molecular Biosciences (100 papers in training set), Top 4%, 0.9%
26. BMC Medical Genomics (36 papers in training set), Top 1%, 0.8%
27. iScience (1063 papers in training set), Top 30%, 0.8%
28. Computers in Biology and Medicine (120 papers in training set), Top 4%, 0.8%
29. Nature Machine Intelligence (61 papers in training set), Top 3%, 0.7%
30. Informatics in Medicine Unlocked (21 papers in training set), Top 1%, 0.7%