
MESSI: Multimodal Experiments with SyStematic Interrogation using Nextflow

Liang, C.; Grewal, T.; Singh, A.; Singh, A.

2026-03-11 bioinformatics
10.64898/2026.03.09.710452 bioRxiv

Background
Multimodal biomedical studies increasingly profile multiple molecular and clinical modalities from the same samples, creating new opportunities for disease prediction and biological discovery. However, benchmarking multimodal integration methods remains difficult because studies often use inconsistent preprocessing, unequal tuning strategies, and non-comparable evaluation schemes, limiting fair assessment across methods.

Results
We developed MESSI (Multimodal Experiments with SyStematic Interrogation), a reproducible Nextflow-based benchmarking framework for multimodal outcome prediction that standardizes data preparation, supports interoperable R and Python workflows, and enforces leakage-free nested cross-validation for model selection and model assessment. MESSI currently implements representative intermediate- and late-integration methods and supports bulk multiomics, bulk multimodal, and single-cell multiomics datasets. In simulation studies with known ground truth, most methods were well calibrated in the absence of signal and achieved high performance under strong signal, whereas differences emerged under weaker signal and in feature recovery. We then applied MESSI to 19 real datasets spanning cancer, neurodevelopmental, neurodegenerative, infectious, renal, transplant, and metastatic disease settings, with diverse modality combinations including transcriptomic, epigenomic, proteomic, imaging, electrical, clinical, and single-cell-derived features. Across bulk multimodal datasets, classification differences were generally modest, although DIABLO and multiview cooperative learning tended to rank highest, while MOFA+glmnet and MOGONET were weaker overall. Biological enrichment analyses revealed clearer differences: DIABLO, RGCCA, MOFA, and IntegrAO more consistently recovered significant Reactome, oncogenic, and tissue-relevant gene signatures. In single-cell multiomics benchmarks, method rankings were more dataset dependent, but DIABLO performed consistently well across all case studies, while RGCCA also showed strong performance in specific settings. Computational analyses further showed that DIABLO and MOFA had the most favorable runtime and memory profiles, whereas multiview was the most time-intensive and IntegrAO the most memory-demanding.

Conclusions
MESSI provides a reproducible, extensible, and equitable framework for benchmarking multimodal integration methods under a common model assessment strategy. Our results indicate that no single method is uniformly optimal across datasets and objectives; instead, method choice should balance predictive performance, biological interpretability, and computational efficiency. MESSI establishes a foundation for transparent benchmarking and future extensions to broader multimodal learning tasks.
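The abstract's core methodological claim is "leakage-free nested cross-validation for model selection and model assessment." A minimal sketch of that scheme, using scikit-learn rather than MESSI's actual pipeline (the estimator, hyperparameter grid, and fold counts below are illustrative assumptions, not MESSI's configuration):

```python
# Illustrative sketch of leakage-free nested cross-validation:
# an inner loop tunes hyperparameters, an outer loop estimates performance,
# and no outer test fold is ever seen during tuning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=30, random_state=0)

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)  # model selection
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # model assessment

# Inner loop: hyperparameter search runs only on each outer-training fold.
model = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=inner_cv,
)

# Outer loop: each held-out fold is untouched by tuning, so the
# resulting scores are unbiased by hyperparameter selection.
scores = cross_val_score(model, X, y, cv=outer_cv, scoring="roc_auc")
print(scores.mean())
```

The key design point is that refitting the grid search inside every outer fold prevents the information leakage that arises when hyperparameters are tuned once on the full dataset and then "assessed" with ordinary cross-validation.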

Matching journals

The top 7 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Probability
1 | Patterns | 70 | Top 0.1% | 12.4%
2 | Bioinformatics | 1061 | Top 2% | 12.1%
3 | Briefings in Bioinformatics | 326 | Top 0.8% | 6.7%
4 | BMC Bioinformatics | 383 | Top 2% | 6.2%
5 | GigaScience | 172 | Top 0.2% | 6.2%
6 | Genome Medicine | 154 | Top 1% | 4.8%
7 | PLOS Computational Biology | 1633 | Top 7% | 4.8%
8 | Nature Communications | 4913 | Top 38% | 3.9%
9 | Bioinformatics Advances | 184 | Top 1% | 3.5%
10 | Nature Machine Intelligence | 61 | Top 1% | 3.0%
11 | PLOS ONE | 4510 | Top 49% | 2.0%
12 | Genome Biology | 555 | Top 4% | 2.0%
13 | Computational and Structural Biotechnology Journal | 216 | Top 4% | 1.7%
14 | NAR Genomics and Bioinformatics | 214 | Top 2% | 1.7%
15 | Nature Methods | 336 | Top 5% | 1.5%
16 | Nucleic Acids Research | 1128 | Top 12% | 1.5%
17 | Scientific Reports | 3102 | Top 63% | 1.5%
18 | npj Systems Biology and Applications | 99 | Top 1% | 1.3%
19 | Cell Reports Methods | 141 | Top 3% | 1.2%
20 | Cell Reports Medicine | 140 | Top 5% | 1.2%
21 | npj Digital Medicine | 97 | Top 3% | 1.1%
22 | The Lancet Digital Health | 25 | Top 0.9% | 0.9%
23 | Nature Biotechnology | 147 | Top 8% | 0.7%
24 | Nature Biomedical Engineering | 42 | Top 2% | 0.7%
25 | Advanced Science | 249 | Top 21% | 0.7%
26 | Genome Research | 409 | Top 5% | 0.6%
27 | Communications Biology | 886 | Top 30% | 0.6%
28 | Cancer Discovery | 61 | Top 2% | 0.6%
29 | Communications Medicine | 85 | Top 2% | 0.6%
30 | Molecular Systems Biology | 142 | Top 2% | 0.6%
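The "top 7 journals account for 50%" statement is a cumulative threshold over the probability column. A small sketch of that computation, using the top-10 probabilities from the table above (the threshold logic is an assumption about how the page computes it):

```python
# Find the smallest rank at which cumulative predicted probability
# reaches 50%, using the top-10 journal probabilities from the table.
probs = [12.4, 12.1, 6.7, 6.2, 6.2, 4.8, 4.8, 3.9, 3.5, 3.0]  # percent

cum = 0.0
for k, p in enumerate(probs, start=1):
    cum += p
    if cum >= 50.0:
        break

print(k, round(cum, 1))  # rank 7 reaches 53.2% cumulative mass
```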