GenePT Revisited: Do Better Text Embeddings Make Better Gene Embeddings?
Hedley, J. G.; Torr, P. H. S.; Märtens, K.
Abstract: GenePT introduced a simple recipe for gene representations: embed each gene's natural-language description with a general-purpose text embedding model and reuse the resulting vectors across downstream tasks. Since GenePT's release, embedding models have improved rapidly, with many strong open and commercial encoders benchmarked on suites such as the Massive Text Embedding Benchmark (MTEB). We present a controlled "leaderboard" study that keeps the GenePT pipeline fixed and varies only the embedding backbone. We benchmark contemporary encoders on four diverse gene embedding tasks: gene-gene interaction prediction, gene property classification, cell type classification, and prediction of transcriptomic responses to unseen genetic perturbations. Across these settings, newer backbones consistently outperform the original GenePT backbone (text-embedding-ada-002), achieving improvements of 1-17%, while enabling fully reproducible research by avoiding API dependencies.
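The GenePT recipe the abstract describes can be sketched as follows. This is a minimal, hedged illustration, not the paper's implementation: the gene descriptions are invented examples, and the `embed` function is a dependency-free hashing stand-in for the real text embedding model (e.g. text-embedding-ada-002 or an open MTEB-benchmarked encoder) that GenePT would call.

```python
import hashlib
import math

# Hypothetical gene descriptions (illustrative only, not the paper's data).
DESCRIPTIONS = {
    "TP53": "Tumor suppressor encoding p53, a regulator of the cell cycle and apoptosis.",
    "MDM2": "E3 ubiquitin ligase that negatively regulates the p53 tumor suppressor.",
    "HBB":  "Encodes the beta subunit of hemoglobin, the oxygen carrier of red blood cells.",
}

def embed(text: str, dim: int = 64) -> list[float]:
    """Stand-in encoder: in the GenePT pipeline this call would be a
    general-purpose text embedding model. Here character trigrams are
    hashed into a fixed-size, L2-normalized vector so the sketch runs
    without any model download or API key."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two unit-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

# Step 1: embed each gene's description once.
gene_vecs = {gene: embed(desc) for gene, desc in DESCRIPTIONS.items()}

# Step 2: reuse the fixed vectors across downstream tasks, e.g. scoring
# candidate gene-gene interactions by embedding similarity.
score = cosine(gene_vecs["TP53"], gene_vecs["MDM2"])
```

Swapping the backbone, as in the study, means replacing only the body of `embed` while every downstream task stays unchanged.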