
Developing SCL2205: A Protein Sequence-based Spatial Modelling Dataset for the Protein Language Model Frontier

Ouso, D.; Pollastri, G.

2026-03-10 · bioinformatics · bioRxiv
DOI: 10.64898/2026.03.08.710388
Abstract

Deep learning (DL) has advanced computational genome annotation tasks such as protein sub-cellular localisation (SCL) prediction. Nonetheless, its potential remains underutilised, primarily because of the limited availability of high-quality reference data and suboptimal input preparation strategies. In this study, we develop and analyse a high-quality dataset derived from the latest release of the Universal Protein Knowledgebase (UniProtKB), designed to address existing challenges and support robust DL-based SCL modelling. The dataset was constructed through extensive quality preprocessing to ensure reliability, manual label mapping to enhance the quantity and diversity of the training data, and stringent partitioning to minimise data leakage. We validated the dataset on independent test sets, achieving up to a 10.8% improvement in the area under the precision-recall curve (PR-AUC) over the state of the art (SoTA). Furthermore, we highlighted potential performance metric inflation in existing SoTA predictors by demonstrating, for the first time, at least 4.8% training-to-testing data leakage (pre-sequence representation) when using only 10% of the training set under homology augmentation (augmentation based on sequence-similarity database searches; details in Sub-section 2.1), a data augmentation strategy commonly used in DL-based SCL prediction. SCL2205 will efficiently support the development of robust, trustworthy, and generalisable DL-based SCL predictors while minimising data leakage and promoting reproducibility. It is openly available under the Creative Commons Zero (CC0 1.0) licence on Dryad and is deployed as a package on the Python Package Index as p-scldata.
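As a rough illustration of the two quantitative checks the abstract reports, the sketch below computes (i) the fraction of test sequences that occur verbatim in a training split, the simplest form of pre-sequence-representation leakage, and (ii) a macro-averaged PR-AUC for a multi-label SCL task. This is a minimal sketch, assuming scikit-learn's average_precision_score as the PR-AUC implementation; the function names and toy data are illustrative, not the authors' code, and homology-level leakage detection would additionally need a similarity-search tool such as MMseqs2 or CD-HIT.

# Minimal sketch (not the paper's implementation): an exact-match leakage
# check plus a macro-averaged PR-AUC, the metric reported in the abstract.
import numpy as np
from sklearn.metrics import average_precision_score

def exact_overlap_fraction(train_seqs, test_seqs):
    # Fraction of test sequences that occur verbatim in the training set.
    # This catches only the crudest pre-sequence-representation leakage;
    # homology-level leakage needs a similarity search (e.g. MMseqs2).
    train_set = set(train_seqs)
    return sum(s in train_set for s in test_seqs) / len(test_seqs)

def macro_pr_auc(y_true, y_score):
    # Area under the precision-recall curve, averaged equally over the
    # location classes of a multi-label problem.
    return average_precision_score(y_true, y_score, average="macro")

if __name__ == "__main__":
    train = ["MKTAYIAK", "MVLSPADK", "GAVLKVLT"]           # toy training sequences
    test = ["MKTAYIAK", "PQITLWQR"]                        # one verbatim duplicate -> 50%
    print(f"exact train-to-test overlap: {exact_overlap_fraction(train, test):.1%}")

    y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])   # 3 proteins x 3 locations
    y_score = np.array([[0.9, 0.2, 0.7],
                        [0.1, 0.8, 0.3],
                        [0.6, 0.7, 0.2]])                   # predicted probabilities
    print(f"macro PR-AUC: {macro_pr_auc(y_true, y_score):.3f}")

The dataset itself is stated to ship on the Python Package Index (pip install p-scldata); whatever loading API that package exposes is not assumed here.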

Matching journals

The top 6 journals account for 50% of the predicted probability mass (a cumulative cutoff over the ranked list; a sketch follows the table).

Rank · Journal · Papers in training set · Percentile · Predicted probability
1 · Bioinformatics · 1061 · Top 1% · 18.7%
2 · Bioinformatics Advances · 184 · Top 0.1% · 14.4%
3 · PLOS Computational Biology · 1633 · Top 5% · 6.8%
4 · Scientific Reports · 3102 · Top 24% · 4.9%
5 · Computational and Structural Biotechnology Journal · 216 · Top 1% · 4.3%
6 · NAR Genomics and Bioinformatics · 214 · Top 0.5% · 4.0%
(50% of the predicted probability mass lies above this line)
7 · Briefings in Bioinformatics · 326 · Top 2% · 3.3%
8 · Journal of Chemical Information and Modeling · 207 · Top 1% · 3.1%
9 · BMC Bioinformatics · 383 · Top 3% · 2.6%
10 · Nature Communications · 4913 · Top 44% · 2.6%
11 · Journal of Cheminformatics · 25 · Top 0.2% · 2.1%
12 · Frontiers in Bioinformatics · 45 · Top 0.1% · 2.1%
13 · GigaScience · 172 · Top 0.9% · 2.1%
14 · Nucleic Acids Research · 1128 · Top 9% · 2.1%
15 · Protein Science · 221 · Top 0.8% · 1.7%
16 · Journal of Molecular Biology · 217 · Top 2% · 1.3%
17 · Journal of Proteome Research · 215 · Top 2% · 1.2%
18 · Advanced Science · 249 · Top 14% · 1.2%
19 · Nature Methods · 336 · Top 5% · 1.2%
20 · PLOS ONE · 4510 · Top 63% · 1.0%
21 · International Journal of Molecular Sciences · 453 · Top 13% · 0.9%
22 · Scientific Data · 174 · Top 2% · 0.9%
23 · Nature Machine Intelligence · 61 · Top 3% · 0.8%
24 · eLife · 5422 · Top 58% · 0.7%
25 · Proteins: Structure, Function, and Bioinformatics · 82 · Top 1.0% · 0.7%
26 · Patterns · 70 · Top 3% · 0.6%
27 · PeerJ · 261 · Top 17% · 0.6%
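The "50% of probability mass" divider in the table is presumably the smallest prefix of the ranked journals whose predicted probabilities sum to at least 0.5. A minimal sketch under that reading (the function name is hypothetical):

def probability_mass_cutoff(probs, mass=0.5):
    # probs: predicted journal probabilities in descending rank order.
    running = 0.0
    for rank, p in enumerate(probs, start=1):
        running += p
        if running >= mass:
            return rank  # smallest rank reaching the target mass
    return len(probs)

# Probabilities of the top rows above: 18.7%, 14.4%, 6.8%, 4.9%, 4.3%, 4.0%, ...
print(probability_mass_cutoff([0.187, 0.144, 0.068, 0.049, 0.043, 0.040, 0.033]))
# -> 6, matching the divider between ranks 6 and 7 (cumulative mass 53.1%)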