
Evaluating Expert Specialization in Mixture-of-Experts Antibody Language Models

Burbach, S. M.; Spandau, S.; Hurtado, J.; Briney, B.

2026-04-22 immunology
10.64898/2026.04.17.719246 bioRxiv

Abstract

Antibody language models (AbLMs) show an impressive aptitude for learning antibody features, but tend to struggle to learn the highly diverse, non-templated regions of antibodies. Existing AbLMs use dense architectures, in which all model parameters attend to every amino acid token. We hypothesized that the modular nature of antibodies could benefit from a sparse mixture-of-experts (MoE) architecture, allowing specific parameters (referred to as experts) to specialize in distinct antibody features. While MoE architectures are widely adopted and optimized in natural language processing, they are less common in biological modeling. To this end, we assess existing MoE routing strategies and find that token-choice routing outperforms expert-choice routing, presumably due to its specialization in CDRH3 residues. We further optimized the token-choice router for AbLMs by minimizing the routing of padding tokens, enabling pre-training with varying sequence lengths. Finally, we show that a large-scale baseline antibody language model with a Top-2 MoE architecture (BALM-MoE), trained on a mixture of unpaired and paired antibody sequences, outperforms its dense counterpart with the same number of active parameters.
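The routing scheme the abstract describes can be sketched in a few lines: in Top-2 token-choice routing, each token selects its two highest-scoring experts, and padding tokens are excluded so they never consume expert capacity. This is a minimal NumPy illustration under our own simplifying assumptions; the function and variable names are ours, not from the paper, and the actual BALM-MoE router will differ in detail (capacity limits, load-balancing losses, etc.).

```python
import numpy as np

def top2_token_choice_route(logits, pad_mask):
    """Top-2 token-choice routing. `logits` has shape (tokens, experts);
    `pad_mask` is True for real tokens and False for padding. Returns the
    two chosen expert indices per token and their renormalized gate
    weights, with padding tokens assigned no expert at all."""
    # Router probabilities: softmax over experts for each token.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    # Each token picks its two highest-probability experts.
    top2 = np.argsort(-probs, axis=-1)[:, :2]
    # Renormalize the two selected probabilities so the gates sum to 1.
    gates = np.take_along_axis(probs, top2, axis=-1)
    gates /= gates.sum(axis=-1, keepdims=True)
    # Mask out padding tokens: no expert assignment, zero gate weight.
    experts = np.where(pad_mask[:, None], top2, -1)
    gates = np.where(pad_mask[:, None], gates, 0.0)
    return experts, gates

# Tiny example: two tokens, four experts; the second token is padding.
logits = np.array([[2.0, 1.0, 0.0, -1.0],
                   [0.0, 3.0, 1.0, 2.0]])
pad_mask = np.array([True, False])
experts, gates = top2_token_choice_route(logits, pad_mask)
# The real token is routed to its two highest-scoring experts; the
# padded token receives no assignment and contributes no load.
```

In an expert-choice router, by contrast, each expert would select its top-k tokens, which spreads load evenly but lets no token guarantee itself an expert; the paper's finding is that the token-driven assignment above works better for antibody sequences.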

Matching journals

The top 7 journals account for 50% of the predicted probability mass.

1. Cell Systems: 14.8% (167 papers in training set, top 0.5%)
2. PLOS Computational Biology: 12.4% (1633 papers in training set, top 2%)
3. Bioinformatics: 6.4% (1061 papers in training set, top 4%)
4. Proceedings of the National Academy of Sciences: 4.4% (2130 papers in training set, top 16%)
5. Cell Reports: 4.3% (1338 papers in training set, top 11%)
6. Nature Computational Science: 4.3% (50 papers in training set, top 0.1%)
7. Nature Communications: 4.0% (4913 papers in training set, top 37%)

50% of probability mass above

8. eLife: 3.7% (5422 papers in training set, top 24%)
9. Frontiers in Immunology: 3.7% (586 papers in training set, top 2%)
10. iScience: 3.6% (1063 papers in training set, top 5%)
11. Communications Biology: 2.4% (886 papers in training set, top 4%)
12. Computational and Structural Biotechnology Journal: 2.1% (216 papers in training set, top 3%)
13. PLOS ONE: 1.9% (4510 papers in training set, top 50%)
14. Nature Methods: 1.9% (336 papers in training set, top 4%)
15. Science: 1.9% (429 papers in training set, top 13%)
16. Scientific Reports: 1.7% (3102 papers in training set, top 58%)
17. Nucleic Acids Research: 1.7% (1128 papers in training set, top 11%)
18. Journal of Molecular Biology: 1.7% (217 papers in training set, top 2%)
19. Bioinformatics Advances: 1.3% (184 papers in training set, top 3%)
20. Proteins: Structure, Function, and Bioinformatics: 1.2% (82 papers in training set, top 0.6%)
21. Genome Medicine: 1.0% (154 papers in training set, top 6%)
22. mAbs: 1.0% (28 papers in training set, top 0.3%)
23. Journal of Chemical Information and Modeling: 0.8% (207 papers in training set, top 3%)
24. BMC Bioinformatics: 0.8% (383 papers in training set, top 6%)
25. Nature: 0.8% (575 papers in training set, top 15%)
26. Frontiers in Genetics: 0.8% (197 papers in training set, top 10%)
27. Nature Machine Intelligence: 0.8% (61 papers in training set, top 3%)
28. Briefings in Bioinformatics: 0.8% (326 papers in training set, top 7%)
29. Science Advances: 0.5% (1098 papers in training set, top 35%)
30. Nature Medicine: 0.5% (117 papers in training set, top 6%)