
Grounding olfactory perception in language: Benchmarks and models for generating natural language odor descriptions

Mascart, C.; Tran, K.; Samoilova, K.; Storan, L. T.; Liu, T.; Koulakov, A.

bioRxiv (animal behavior and cognition), 2026-03-05
DOI: 10.64898/2026.03.04.709650

Recent advances in deep learning have enabled prediction of odorant perception from molecular structure, opening new avenues for odor classification. However, most existing models are limited to predicting percepts from fixed vocabularies and fail to capture the full richness of olfactory experience. Progress is further limited by the scarcity of large-scale olfactory datasets and the lack of standardized metrics for evaluating free-form natural-language odor descriptions. To address these challenges, we introduce Odor Description and Inference Evaluation Understudy (ODIEU), a benchmark that includes perceptual descriptions of over 10,000 molecules paired with a model-based metric for evaluating free-form odor text descriptions. The metric uses Sentence-BERT (SBERT) models fine-tuned on olfactory descriptions to better evaluate human-generated odor texts. Using the fine-tuned SBERT models, we show that free-form odor descriptions carry additional perceptual information in their syntactic structure beyond that of semantic labels. We further introduce CIRANO (Chemical Information Recognition and Annotation Network for Odors), a transformer-based model that generates free-form odor descriptions directly from molecular structure, thus implementing molecular structure-to-text (S2T) prediction. CIRANO achieves performance comparable to that of humans. Finally, we generate human-like descriptions from mouse olfactory bulb neural data using an invertible SBERT model, yielding neural-to-text (N2T) predictions highly aligned with human descriptions. Together, CIRANO and ODIEU establish a standardized framework for generating natural-language olfactory descriptions and evaluating their alignment with human perception. Code is available at https://github.com/KoulakovLab/ODIEU
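The abstract does not spell out how the SBERT-based metric scores a generated description, but the core operation such a metric implies is embedding two odor descriptions and comparing the embeddings, typically by cosine similarity. A minimal sketch with toy vectors follows; the vectors, the example descriptions, and the commented-out `sentence-transformers` usage are illustrative assumptions, not details from the paper:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for SBERT sentence embeddings (real SBERT vectors have
# hundreds of dimensions; these 3-d vectors are purely illustrative).
ref_emb = [0.9, 0.1, 0.30]   # e.g. embedding of "sweet, fruity, banana-like"
gen_emb = [0.8, 0.2, 0.35]   # e.g. embedding of a model-generated description

score = cosine_similarity(ref_emb, gen_emb)

# With an actual SBERT model via the sentence-transformers library
# (model name is an assumption, not from the source):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   ref_emb, gen_emb = model.encode(["sweet, fruity, banana-like",
#                                    "ripe fruit with a sweet note"])
```

A fine-tuned model, as described in the abstract, would replace the generic encoder so that olfactory vocabulary ("musky", "green", "camphoraceous") is embedded in a perceptually meaningful space before the similarity is computed.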

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                          Papers in training set  Journal percentile  Predicted probability
1     Nature Machine Intelligence                      61                      Top 0.1%            27.8%
2     Nature Methods                                   336                     Top 1%              7.2%
3     Science                                          429                     Top 5%              6.4%
4     Nature Communications                            4913                    Top 28%             6.4%
5     Proceedings of the National Academy of Sciences  2130                    Top 11%             6.4%
      (50% of predicted probability mass above this line)
6     Nature                                           575                     Top 6%              4.3%
7     PLOS Computational Biology                       1633                    Top 8%              4.0%
8     iScience                                         1063                    Top 5%              3.6%
9     Nature Human Behaviour                           85                      Top 0.9%            3.6%
10    Cell Reports Methods                             141                     Top 0.8%            3.6%
11    Scientific Reports                               3102                    Top 45%             2.6%
12    Nature Neuroscience                              216                     Top 3%              2.1%
13    eLife                                            5422                    Top 41%             1.7%
14    Nature Biomedical Engineering                    42                      Top 0.8%            1.7%
15    PLOS ONE                                         4510                    Top 57%             1.5%
16    Cell Reports                                     1338                    Top 28%             1.2%
17    Science Advances                                 1098                    Top 28%             0.8%
18    npj Digital Medicine                             97                      Top 3%              0.8%
19    Nature Ecology & Evolution                       113                     Top 4%              0.8%
20    GigaScience                                      172                     Top 4%              0.6%
21    eNeuro                                           389                     Top 10%             0.6%
22    Bioengineering                                   24                      Top 2%              0.6%
23    Genome Research                                  409                     Top 5%              0.6%
24    Cell Research                                    49                      Top 3%              0.6%
25    Computers in Biology and Medicine                120                     Top 5%              0.6%