
Comparing Brain-Score and ImageNet performance with responses to the scintillating grid illusion

Kraus, M. K.; Verkerk, L.; Keemink, S. W.

2025-06-24 · neuroscience
bioRxiv · DOI: 10.1101/2025.06.18.660291
Abstract

Perceptual illusions are widely used to study brain processing and are essential for elucidating underlying function. Successful brain models should therefore also be able to reproduce these illusions. Some of the most successful models of vision are variants of Deep Neural Networks (DNNs): they can classify images with human-level accuracy, and many of their behavioral and activation measurements correlate well with those of humans and animals. Several networks have also been shown to reproduce some human illusions, but such tests have typically covered only a small number of networks. In addition, it remains unclear whether the presence of illusions is linked to how accurate or how brain-like a DNN is. Here, we consider the scintillating grid illusion, to which two DNNs have previously been shown to respond as if affected by the illusion. We develop an Illusion Strength measure based on model activation correlations, which takes into account the difference in responses to illusion and control images. We then compare Illusion Strength to both model performance (top-1 ImageNet accuracy) and how well the model explains brain activity (Brain-Score). We show that the illusion is measurable in a wide variety of networks (41 out of 51). However, we do not find a strong correlation between Illusion Strength and either Brain-Score or performance. Some models score highly on Illusion Strength but not on Brain-Score, or vice versa, but no model does well on both. Finally, the results differ strongly between model types, particularly between convolutional and transformer-based architectures, with transformers having low Illusion Strength scores. Overall, our work shows that Illusion Strength is an important metric to consider when assessing brain models, and that some models may still lack processing that is important for brain function.
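The abstract describes the Illusion Strength measure only at a high level. The sketch below is one minimal way a correlation-based score of this kind could be computed; the reference image with physically darkened intersections, the use of Pearson correlation, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def pearson(a, b):
    """Pearson correlation between two flattened activation vectors."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


def illusion_strength(act_illusion, act_control, act_percept):
    """Hypothetical score (an assumption, not the paper's exact formula):
    how much more the activations to the illusion image resemble the
    activations to a reference image in which the illusory dark dots are
    physically drawn, compared with a control grid that does not elicit
    the illusion."""
    return pearson(act_illusion, act_percept) - pearson(act_control, act_percept)


# Toy usage: random vectors stand in for one layer's activations to each image.
rng = np.random.default_rng(0)
act_ill, act_ctrl, act_perc = (rng.standard_normal(4096) for _ in range(3))
print(illusion_strength(act_ill, act_ctrl, act_perc))
```

With activations extracted from a candidate network, a positive score of this kind would indicate that the network responds to the illusion image more like it responds to the physically altered image than the control grid does.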

Matching journals

The top 5 journals account for 50% of the predicted probability mass (a short sketch of this cumulative cutoff follows the table).

Rank  Probability  Training papers  Percentile  Journal
   1        14.6%             1633      Top 2%  PLOS Computational Biology
   2        14.2%               32    Top 0.1%  Neural Networks
   3        10.3%               53    Top 0.2%  Frontiers in Computational Neuroscience
   4         6.3%             3102     Top 19%  Scientific Reports
   5         6.3%              223    Top 0.5%  Frontiers in Neuroscience
----- 50% of probability mass above this line -----
   6         3.6%               92    Top 0.2%  Journal of Vision
   7         3.6%             4510     Top 40%  PLOS ONE
   8         3.6%               38    Top 0.2%  Frontiers in Neuroinformatics
   9         3.0%               36    Top 0.2%  Neural Computation
  10         2.7%              295      Top 2%  Human Brain Mapping
  11         2.3%              389      Top 4%  eNeuro
  12         1.9%               13    Top 0.2%  Neurocomputing
  13         1.9%               36    Top 0.2%  Frontiers in Neural Circuits
  14         1.9%               40    Top 0.4%  Neuroinformatics
  15         1.8%              813      Top 4%  NeuroImage
  16         1.7%             4913     Top 52%  Nature Communications
  17         1.7%              116    Top 0.6%  Network Neuroscience
  18         1.3%              197      Top 1%  Journal of Neural Engineering
  19         1.2%               33    Top 0.8%  Medical Image Analysis
  20         0.8%              242      Top 3%  Imaging Neuroscience
  21         0.7%               83    Top 0.6%  Brain Structure and Function
  22         0.7%             2130     Top 45%  Proceedings of the National Academy of Sciences
  23         0.7%              886     Top 27%  Communications Biology
  24         0.6%             1063     Top 38%  iScience
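As a small worked check of the 50% statement above the table, the snippet below accumulates the listed probabilities in rank order until the running total reaches 0.5. The probabilities are copied from the table; the cutoff rule itself is an assumption about how the divider is placed.

```python
# Predicted probabilities copied from the table above, in rank order.
probs = [0.146, 0.142, 0.103, 0.063, 0.063, 0.036, 0.036, 0.036, 0.030,
         0.027, 0.023, 0.019, 0.019, 0.019, 0.018, 0.017, 0.017, 0.013,
         0.012, 0.008, 0.007, 0.007, 0.007, 0.006]

# Accumulate in rank order until the running total reaches 0.5; the journals
# seen so far are the ones above the 50%-of-probability-mass divider.
cumulative = 0.0
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 0.5:
        break

print(rank, round(cumulative, 3))  # -> 5 0.517: the top 5 journals cross 50%
```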