Cognition
Elsevier BV
All preprints, ranked by how well they match Cognition's content profile, based on 44 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Lopes Rego, A. T.; Snell, J.; Meeter, M.
Although word predictability is commonly considered an important factor in reading, sophisticated accounts of predictability in theories of reading are still lacking. Computational models of reading traditionally use cloze norming as a proxy for word predictability, but what cloze norms precisely capture remains unclear. This study investigates whether large language models (LLMs) can fill this gap. Contextual predictions are implemented via a novel parallel-graded mechanism, where all predicted words at a given position are pre-activated as a function of contextual certainty, which varies dynamically as text processing unfolds. Through reading simulations with OB1-reader, a cognitive model of word recognition and eye-movement control in reading, we compare the model's fit to eye-movement data when using predictability values derived from a cloze task against those derived from LLMs (GPT-2 and LLaMA). Root Mean Square Error between simulated and human eye movements indicates that LLM predictability provides a better fit than cloze norms. This is the first study to use LLMs to augment a cognitive model of reading with higher-order language processing while proposing a mechanism for the interplay between word predictability and eye movements.
Author Summary: Reading comprehension is a crucial skill that is highly predictive of later success in education. One aspect of efficient reading is our ability to predict what is coming next in the text based on the current context. Although we know predictions take place during reading, the mechanism through which contextual facilitation affects oculomotor behaviour in reading is not yet well understood. Here, we model this mechanism and test different measures of predictability (computational vs. empirical) by simulating eye movements with a cognitive model of reading.
Our results suggest that, when implemented with our novel mechanism, a computational measure of predictability provides better fits to eye movements in reading than a traditional empirical measure. With this model, we scrutinize how predictions about upcoming input affect eye movements in reading, and how computational approaches to measuring predictability may support theory testing. In the short term, modelling aspects of reading comprehension helps reconnect theory building and experimentation in reading research. In the longer term, a better understanding of reading comprehension may help improve reading pedagogies, diagnoses and treatments.
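The model-comparison step described in this abstract, scoring each predictability source by the root mean square error between simulated and human eye movements, can be sketched as follows. The gaze durations below are hypothetical placeholders, not values from the study:

```python
import math

def rmse(simulated, observed):
    """Root mean square error between simulated and observed eye-movement measures."""
    assert len(simulated) == len(observed)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed))

# Hypothetical mean gaze durations (ms) for five words: human data vs. two simulations
human     = [210.0, 250.0, 190.0, 300.0, 220.0]
sim_cloze = [230.0, 240.0, 210.0, 270.0, 240.0]
sim_llm   = [215.0, 248.0, 196.0, 290.0, 226.0]

fits = {"cloze": rmse(sim_cloze, human), "llm": rmse(sim_llm, human)}
best = min(fits, key=fits.get)  # lower RMSE = better fit
```

On these toy numbers the LLM-based simulation wins; in the study this comparison is of course run over full eye-movement corpora rather than five words.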
Andreetta, S.; Soldatkina, O.; Boboeva, V.; Treves, A.
To test the idea that poetic meter emerged as a cognitive schema to aid verbal memory, we have focused on classical Italian poetry and on its three basic components of meter: rhyme, accent and verse length. Meaningless poems were generated by introducing prosody-invariant non-words into passages from Dante's Divina Commedia and Ariosto's Orlando Furioso, which were then further manipulated by selectively ablating rhymes, modifying accent patterns or altering the number of syllables. The resulting four versions of each non-poem were presented in a fully balanced design to cohorts of high-school-educated Italian native speakers, who were then asked to retrieve 3 target non-words. Surprisingly, we found that the integrity of Dante's meter had no significant effect on memory performance. With passages derived from Ariosto, instead, removing each component downgrades memory by an amount proportional to its contribution to perceived metric plausibility, with rhymes having the strongest effects, followed by accents and then by verse length. Counterintuitively, the fully metric versions required longer reaction times, implying that activating metric schemata involves a cognitive cost. Within schema theories, this finding provides evidence for high-level interactions between procedural and episodic memory.
Chang, Y.-N.; Welbourne, S.; Furber, S.; Lambon Ralph, M. A.
Computational modelling has served as a powerful tool to advance our understanding of language processes by making theoretical ideas rigorously specified and testable (a form of "open science" for theory building). In reading research, one of the most influential computational modelling frameworks is the triangle model of reading, which characterises the mappings between orthography, phonology and semantics. Currently, most instantiations of the triangle modelling framework start processing from an orthographic level that abstracts away visual processing. Moreover, without visual processing, most models do not provide an opportunity to investigate visually-related dyslexias. To bridge this crucial gap, the present study extended the existing triangle models by implementing an additional visual input. We trained the model to learn to read from visual input without pre-defined orthographic representations. The model was assessed on reading tasks both when intact and after damage (to mimic acquired alexias). The simulation results demonstrated that the model was able to name words and nonwords as well as make lexical decisions. Damage to the visual, phonological or semantic components of the model resulted in the expected reading impairments associated with pure alexia, phonological dyslexia, and surface dyslexia, respectively. The simulation results demonstrated for the first time that both typical and neurologically-impaired reading, including both central and peripheral dyslexias, could be addressed in this extended triangle model of reading. The findings are consistent with the primary systems account.
Anllo, H.; Watanabe, K.; Sackur, J.; de Gardelle, V.
Verbal hints can bias perceptual decision-making, even when the information they provide is false. Whether individuals may be more or less susceptible to such perceptual influences, however, remains unclear. We asked naive participants to indicate the dominant color in a series of stimuli, after giving them a false statement about which color would likely dominate. As anticipated, this statement biased participants' perception of the dominant color, as shown by a correlated shift of their perceptual decisions, confidence judgments and response times. Crucially, this perceptual bias was more pronounced in participants with higher levels of susceptibility to social influence, as measured by a standard suggestibility scale. Together, these results indicate that even without much apparatus, simple verbal hints can affect our perceptual reality, and that social steerability can determine how much they do so. Susceptibility to suggestion might thus be considered an integral part of perceptual processing.
Statement of relevance: At a time when fake news soars, understanding the role that simple verbal descriptions play in how we perceive the world around us is paramount. Extensive research has shown that perception is permeable to well-orchestrated manipulation. Comparatively less attention has been paid to the perceptual impact of false information when the latter is imparted simply and straightforwardly, through short verbal hints and instructions. Here we show that even a single sentence suffices to bias perceptual decision-making, and that, critically, this bias varies across individuals as a function of susceptibility to social influence. Considering that perception here was biased by a single, plain sentence, we argue that researchers, communicators and policy-makers should pay careful attention to the role that social suggestibility plays in how we build our perceptual reality.
Lacey, S.; Matthews, K. L.; Nygaard, L. C.; Sathian, K.
Sound symbolism occurs when the sound of a word alone can convey its meaning, e.g. "balloon" and "spike" sound rounded and pointed, respectively. Sound-symbolic correspondences are widespread in natural languages, but it is unclear how they are instantiated across different domains of meaning. Here, participants rated auditory pseudowords on opposing scales of seven different sound-symbolic domains: shape (rounded-pointed), texture (hard-soft), weight (light-heavy), size (small-big), brightness (bright-dark), arousal (calming-exciting), and valence (good-bad). Ratings showed cross-domain relationships, some mirroring those between corresponding physical domains, e.g. size and weight ratings were associated, reflecting a physical size-weight relationship, while others involved metaphorical mappings, e.g., bright/dark mapped onto good/bad, respectively. The phonetic features of the pseudowords formed unique sets with characteristic feature weightings for each domain and tended to follow the cross-domain ratings relationships. These results suggest that sound-symbolic correspondences rely on domain-specific patterns of phonetic features, with cross-domain correspondences reflecting physical or metaphorical relationships.
Kissane, H.; Tziridis, K.; Schilling, A.; Krauss, P.; Herbst, T.
This study investigates the cognitive processing of verb-particle constructions (VPCs) using eye-tracking data analysis to explore how English native speakers process different types of the sequence NP-verb-particle-NP during reading tasks. While previous research has focused on phrasal verbs, our study extends this examination to include patterns with prepositions, aiming to identify distinct cognitive engagement patterns and processing efficiencies associated with each. We employed the Provo Corpus to analyse eye movements while participants read sentences containing these constructions, focusing on metrics such as first fixation duration, gaze duration, go-past times, and total reading times. Our findings indicate similarities in the lexical verbs but significant differences in the particles, revealing how these two types of constructions are processed, with phrasal verbs sometimes processed more efficiently than their prepositional counterparts. This suggests that phrasal verbs might be more deeply entrenched in the linguistic repertoire of native speakers, possibly functioning as single lexical units. This research contributes to our understanding of how complex structures are processed and of the cognitive mechanisms that support this processing, offering insights that could influence linguistic theory and language education.
Takahashi, T.; Oyo, K.; Tamatsukuri, A.; Higuchi, K.
We view observational causal induction as a statistical independence test under the rarity assumption. This paper complements the two-stage theory of causal induction proposed by Hattori and Oaksford (2007) with a computational analysis. We show that their dual-factor heuristic (DFH) model has a rational account as the square root of an index of (non-)independence under the extreme rarity assumption, contrary to the criticism that the DFH model is non-normative (e.g., Lu et al., 2008). We introduce a model that considers the proportion of assumed-to-be rare instances (pARIs), which is the probability of biconditionals (according to several theories of compound conditionals) and can be seen as a simplified version of the DFH model. While being a single conditional probability, pARIs approximates the non-independence measure, the square of DFH. In reproducing the meta-analysis in Hattori and Oaksford (2007), we confirm that pARIs and DFH have the same level of descriptive adequacy, and that the two models have the highest fit among more than 40 models. We then critically examine the computer simulations that were central to the rational analysis in Hattori and Oaksford (2007). We point out two problems in their simulations: samples in some of the simulations being restricted to generative ones, and indefinite values of the models arising from small samples. In light of the latter problem of definability in particular, pARIs shows higher applicability.
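The two indices compared in this abstract can be computed directly from a 2x2 contingency table of cause/effect co-occurrence. The sketch below follows the standard definitions (DFH as the geometric mean of P(e|c) and P(c|e); pARIs as P(c and e | c or e)); the counts are toy values chosen to respect the rarity assumption, not data from the paper:

```python
import math

def dfh(a, b, c, d):
    """Dual-factor heuristic: the geometric mean of P(e|c) and P(c|e).
    a = cause & effect present, b = cause only, c = effect only,
    d = neither (d is not used by the index itself)."""
    return math.sqrt((a / (a + b)) * (a / (a + c)))

def paris(a, b, c, d):
    """pARIs: proportion of co-occurrences among trials in which the cause
    or the effect occurred, i.e. P(c & e | c or e)."""
    return a / (a + b + c)

# Toy counts: joint occurrences are rare relative to the d cell (rarity)
a, b, c, d = 8, 2, 2, 88
# As the abstract notes, pARIs approximates the square of DFH when b and c are small:
gap = abs(paris(a, b, c, d) - dfh(a, b, c, d) ** 2)
```

With these counts DFH is 0.8, pARIs is 2/3, and the gap between pARIs and DFH squared is under 0.03, illustrating the approximation claim.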
Cavdan, M.; Drewing, K.; Doerschner, K.
The softness of objects can be perceived through several senses. For instance, to judge the softness of our cat's fur, we do not only look at it, we also run our fingers in idiosyncratic ways through its coat. Recently, we have shown that haptically perceived softness covaries with the compliance, viscosity, granularity, and furriness of materials (Dovencioglu et al., 2020). However, it is unknown whether vision can provide similar information about the various aspects of perceived softness. Here, we investigated this question in an experiment with three conditions: in the haptic condition, blindfolded participants explored materials with their hands; in the visual-static condition, participants were presented with close-up photographs of the same materials; and in the visual-dynamic condition, participants watched videos of the hand-material interactions that were recorded in the haptic condition. After haptically or visually exploring the materials, participants rated them on various attributes. Our results show a high overall perceptual correspondence between the three experimental conditions. With a few exceptions, this correspondence tended to be strongest between the haptic and visual-dynamic conditions. These results are discussed with respect to the information potentially available through the senses, or through prior experience, when judging the softness of materials.
Clarke, J.; Rittershofer, K.; Ward, E. K.; Yon, D.; Press, C.
Over the past two decades, converging evidence from neuroscience and psychology has shown that predictions based on learnt statistical regularities exert a widespread influence on perception, action and cognition. Predictive processes in cognition and the brain are usually modelled as tracking objective event probabilities, deriving predictions and prediction errors from the statistical structure of the environment. However, our subjective models of our environments do not always align with these objective statistics. Currently, we know little about how these subjective representations may shape predictive processing. To separate subjective and objective contributions to prediction, we conducted three studies where cues (actions or tones) predicted visual outcomes (shapes or Gabors) with varying contingencies, and adult participants discriminated these outcomes. Uniquely to our paradigm, participants also reported their experiences of the statistical structure embedded in the task - the subjective probability (Experiment 1; N = 68), expectedness (Experiment 2; N = 35), or surprise (Experiment 3; N = 35) associated with the outcomes. When modelling subjective ratings alongside objective structure, the speed of perceptual decisions was best explained by independent, additive contributions of both. The decision itself was usually only explained by the subjective ratings, with little additional variance explained by objective statistical structure. These findings suggest that subjective experience may play a key, overlooked role in predictive processes, and open a host of interesting questions about the relative objective and subjective contributions to prediction, perception, and learning.
Arun, I.; Lazar, L.
The influence of language on perceptual processes, referred to as the Whorfian hypothesis, has been a contentious issue. Cross-linguistic research and lab-based experiments have shown that verbal labels can facilitate perceptual and discriminatory processes, mostly in visual and auditory modalities. Here, we investigated whether verbal labels improve performance in a tactile texture discrimination task using natural textures. We also explored whether the grammatical category of these verbal labels plays a role in discrimination ability. In our experiments, we asked the participants to discriminate between pairs of textures presented to the fingertip after a five-day training phase. During the training phase, the tactile textures and English pseudowords were co-presented consistently in the congruent (experimental) condition and inconsistently in the incongruent (control) condition, allowing them to form implicit associations only in the former condition. The pseudoword verbal labels belonged to two grammatical categories, verb-like and noun-like. We found an improvement in the texture discrimination ability only for the congruent condition, irrespective of the grammatical category.
Agrawal, A.; Nag, S.; Hari, K. V. S.; Arun, S.
Fluent reading is an important milestone in education, but we lack a clear understanding of why children vary so widely in attaining this milestone. Language-related factors such as rapid automatized naming (RAN) and phonological awareness have been identified as important factors that influence reading fluency. Of theoretical interest is also, however, whether aspects of visual processing influence reading fluency. To investigate this issue, we tested primary school children (n = 68) on four tasks: two reading fluency tasks (word reading and passage reading), a RAN task to measure naming speed, and a visual search task using letters and bigrams to measure visual processing. As expected, the RAN score was strongly correlated with reading fluency. In addition, visual processing of bigrams was correlated with reading fluency. Importantly, this association was specific to upright but not inverted bigrams, and to bigrams with normal but not large letter spacing. Thus, reading fluency in children is accompanied by specialized changes in upright bigram processing. We propose that bigram processing during visual search could complement existing measures of language processing to understand individual differences in reading fluency.
Erfanian, M.; Meunier, L.; Gajewski, J.-F.
Cognitive overload can impair professional scepticism in high-stakes contexts such as auditing. In these settings, sustaining professional scepticism is essential. Default nudges, or pre-selected options, may offset these effects by reducing cognitive demands. We conducted two online experiments to examine how cognitive load and default nudges influence professional scepticism in auditing decisions. Experiment 1 validated a dot memory task manipulation of cognitive load and identified low- and high-load conditions for subsequent testing. Experiment 2 embedded this manipulation in Phillips' audit task, which is used to measure professional scepticism in auditing. Results showed that cognitive load slowed responses and reduced accuracy. Default nudges accelerated responding and improved accuracy under load, but only when aligned with the most probable response; misaligned nudges reduced accuracy. These findings suggest that defaults act as conditional scaffolds under cognitive strain, supporting judgment and decision-making in some contexts but introducing risks in others. Misaligned defaults reduced accuracy, indicating that they can exploit intuitive responding rather than enhance deliberation.
Wallenberg, J. C.; Hinton, T. D.; Smulders, T. V.; Read, J. C. A.; Cuskley, C.; Fadhilah, S. N.
This study builds on work on language processing and information theory which suggests that informationally uniform, or smoother, sequences are easier to process than ones in which information arrives in clumps. Because episodic memory is a form of memory in which information is encoded within its surrounding context, we predicted that episodic memory in particular would be sensitive to information distribution. We used the "dual process" theory of recognition memory to separate the episodic memory component (recollection) from the non-episodic component (familiarity) of recognition memory. Though we find a weak effect in the predicted direction, this does not reach statistical significance and so the study does not support the hypothesis. The study does replicate a known effect from the literature where low frequency words are more easily recognized than high frequency ones when participants employ recollection-type memory. We suggest our results may be explained by linguistic processing being particularly adapted to processing linear sequences of information in a way that episodic memory is not. Episodic memory likely evolved to deal with unpredictable, sometimes clumped, information streams.
Lacey, S.; Matthews, K. L.; Hoffmann, A. M.; Sathian, K.; Nygaard, L. C.
Sound symbolism, the idea that the sound of a word alone can convey its meaning, is often studied using auditory pseudowords. For example, people reliably assign the auditory pseudowords "bouba" and "kiki" to rounded and pointed shapes, respectively. Previously we showed that representational dissimilarity matrices (RDMs) of the shape ratings of auditory pseudowords correlated significantly with RDMs of acoustic parameters reflecting spectro-temporal variations; the ratings also correlated significantly with voice quality features. Here, participants rated auditory pseudowords on scales representing categorical opposites across seven meaning domains, including shape. Examination of the relationships of the perceptual ratings to spectro-temporal and vocal parameters of the pseudowords essentially replicated our previous findings for shape while varying patterns emerged for the other domains. Thus, the spectro-temporal and vocal properties of spoken pseudowords contribute differentially to sound-symbolic mapping depending on the meaning domain.
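The RDM-based analysis this abstract builds on can be sketched minimally: form a representational dissimilarity matrix of pairwise rating differences per domain, then correlate its off-diagonal entries with those of an RDM built from an acoustic parameter. All names and values below are hypothetical illustrations, not the study's stimuli or measures:

```python
def rdm(values):
    """Representational dissimilarity matrix: pairwise absolute differences."""
    n = len(values)
    return [[abs(values[i] - values[j]) for j in range(n)] for i in range(n)]

def upper_triangle(m):
    """Flatten the off-diagonal upper triangle of a square matrix."""
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical shape ratings and one acoustic parameter for four pseudowords
shape_ratings  = [1.0, 2.0, 4.0, 5.0]
acoustic_param = [0.1, 0.3, 0.7, 0.9]
r = pearson(upper_triangle(rdm(shape_ratings)), upper_triangle(rdm(acoustic_param)))
```

Because the toy acoustic values are a linear transform of the toy ratings, the two RDMs here correlate perfectly; real rating and acoustic RDMs would correlate only partially, which is the quantity of interest.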
Menashe, B.; Drake, A.; Ben-Shachar, M.
Mirative markers, such as "surprisingly", explicitly encode a violation of expectations. Such markers are used for expectation management during communication. Sensitivity to mirative markers relies on two abilities: i) updating expectations upon recognizing a mirative marker, and ii) identifying expectation violations warranting the use of a mirative marker. In this study, we compared sensitivity to mirative markers in humans and large language models (LLMs). In part 1, we used a sentence-completion task, where humans and LLMs were presented with sentence fragments and asked to continue them. Results show that for both humans and LLMs, the presence of a mirative marker significantly increased response entropy and decreased top-response probability, in line with theoretical accounts of mirative processing. In part 2, we created a novel task, mirative polarity selection, where humans and LLMs are presented with a sentence pair and asked to select whether it was connected by a mirative marker ("surprisingly") or an anti-mirative marker ("unsurprisingly"). Results show that LLMs perform at an impressive human-equivalent level. We conclude that both humans and LLMs use mirative markers as cues for calibrating their subsequent expectations during sentence processing.
Mitterer, H.; Arunkumar, M.; van Paridon, J.; Huettig, F.
How do different levels of representation interact in the mind? Key evidence for answering this question comes from experimental work that investigates the influence of knowledge of written language on spoken language processing. Here we tested whether learning orthographic representations (through reading) influences pre-lexical phonological representations in spoken-word recognition using a perceptual learning paradigm. Perceptual learning is well suited to reveal differences in pre-lexical representations that might be caused by learning to read because it requires the functional use of pre-lexical representations in order to generalize a learning experience. In a large-scale behavioural study in Chennai, India, 97 native speakers of Tamil with varying reading experience (from completely illiterate to highly literate) participated. In marked contrast to their performance in other cognitive tasks, even completely illiterate participants showed a perceptual learning effect that was not moderated by reading experience. This finding suggests that pre-lexical phonological representations are not substantially changed by learning to read and thus poses important constraints for the debate about the degree of interactivity between different levels of representations during human information processing.
Heilbron, M.; van Haren, J.; Hagoort, P.; de Lange, F. P.
In a typical text, readers look much longer at some words than at others and fixate some words multiple times, while skipping others altogether. Historically, researchers explained this variation via low-level visual or oculomotor factors, but today it is primarily explained in terms of cognitive factors, such as how well word identity can be predicted from context or discerned from parafoveal preview. While the existence of these effects has been well established in experiments, the relative importance of prediction, preview and low-level factors for eye movement variation in natural reading is unclear. Here, we address this question in three large datasets (n=104, 1.5 million words), using a deep neural network and Bayesian ideal observer to model linguistic prediction and parafoveal preview from moment to moment in natural reading. Strikingly, neither prediction nor preview was important for explaining word skipping - the vast majority of skipping was explained by a simple oculomotor model. For reading times, by contrast, we found strong but independent contributions of both prediction and preview, with effect sizes matching those from controlled experiments. Together, these results challenge dominant models of eye movements in reading by showing that linguistic prediction and parafoveal preview are not important determinants of word skipping.
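A "simple oculomotor model" of skipping, of the kind this abstract credits with most of the explained variance, can be caricatured as a logistic function of word length and launch distance. The coefficients below are purely illustrative, not the paper's fitted values:

```python
import math

def skip_probability(word_length, launch_distance,
                     b0=2.0, b_len=-0.45, b_dist=-0.2):
    """Toy oculomotor account of word skipping: short words lying close to
    the saccade's launch site are likely to be skipped. Coefficients are
    illustrative placeholders, not estimates from the study."""
    z = b0 + b_len * word_length + b_dist * launch_distance
    return 1.0 / (1.0 + math.exp(-z))
```

The point of such a model is that skipping probability falls off with word length and launch distance alone, with no term for predictability or preview, which is what the abstract reports explained the vast majority of skipping.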
Kiai, A.; Melloni, L.
Statistical learning (SL) allows individuals to rapidly detect regularities in the sensory environment. We replicated previous findings showing that adult participants become sensitive to the implicit structure in a continuous speech stream of repeating tri-syllabic pseudowords within minutes, as measured by standard tests in the SL literature: a target detection task and a 2AFC word recognition task. Consistent with previous findings, we found only a weak correlation between these two measures of learning, leading us to question whether there is overlap between the information captured by these two tasks. Representational similarity analysis on reaction times measured during the target detection task revealed that reaction time data reflect sensitivity to transitional probability, triplet position, word grouping, and duplet pairings of syllables. However, individual performance on the word recognition task was not predicted by similarity measures derived for any of these four features. We conclude that online detection tasks provide richer and multi-faceted information about the SL process, as compared with 2AFC recognition tasks, and may be preferable for gaining insight into the dynamic aspects of SL.
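Sensitivity to transitional probability, the core statistic in such segmentation studies, can be computed directly from a syllable stream. The pseudowords and presentation order below are illustrative, not the study's stimuli:

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP(b | a) = count(a immediately followed by b) / count(a)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical stream built from two tri-syllabic pseudowords in varied order
words = {0: ("bi", "da", "ku"), 1: ("pa", "go", "la")}
order = [0, 1, 0, 0, 1, 1]
stream = [syll for i in order for syll in words[i]]
tps = transitional_probabilities(stream)
```

Within a pseudoword the TP is 1.0 (e.g. "bi" is always followed by "da"), while across word boundaries it drops; that dip is the regularity learners are thought to exploit.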
Lubineau, M.; Potier-Watkins, C.; Glasel, H.; Dehaene, S.
Purpose: Which processes induce variations in reading speed in young readers with the same amount of education, but different levels of reading fluency? Here, we tested a prediction of the dual-route model: as fluency increases, these variations may reflect a decreasing reliance on decoding and an increasing reliance on the lexical route. Method: 1,500 French 6th graders completed a one-minute speeded reading-aloud task evaluating fluency, and a 10-minute computerized lexical decision task evaluating the impact of word length, word frequency and pseudoword type. Results: As predicted, the word length effect varied dramatically with reading fluency, with the least fluent group showing a length effect even for frequent words. The frequency effect also varied, but solely in proportion to overall slowness, suggesting that frequency affects the decision stage in all readers, while length impacts poor readers disproportionately. Response times and errors were also affected by pseudoword type (e.g. letter substitutions or transpositions), but these effects did not vary much with fluency. Overall, lexical decision variables were excellent predictors of reading fluency (r=0.62). Conclusion: Our results call attention to middle-school reading difficulties and encourage the use of lexical decision as a test of students' mental lexicon and of the automatization of reading.
Malik, A.; Yu, Y.; Boyaci, H.; Doerschner, K.
While research on the perception of line drawings has long demonstrated the importance of contours in object recognition, recent work shows that contours can also convey material properties. For example, even simple 2D shapes with varying contours have been shown to evoke vivid impressions of different materials (Pinna & Deiana, 2015). However, such static representations capture only a single moment in time. When a material moves, its contours shift, evolve, or deform over time, creating contour motion. Does this contour motion convey diagnostic information about material properties, independent of surface appearance? Existing studies on the role of dynamic cues in material perception either use fully rendered 3D stimuli, where contour motion is confounded with rich surface information, or motion-only displays (dynamic dot stimuli or noise patches), which eliminate surface cues but also lack clearly defined contours. As a result, the relative contribution of contour motion to material perception remains unclear. To address this gap, we measured how human observers perceive materials from dynamic line drawings ("line"), compared to animations of fully textured stimuli that carry optical and motion information ("full"), as well as dynamic dot stimuli ("dot"). Stimuli were rendered versions (full, dot, line) of material animations from five material categories (jelly, liquid, smoke, fabric, and rigid-breakable). In one experiment, participants rated five material attributes (dense, flexible, wobbly, fluid, airy motion), and in a second experiment, participants were asked to choose which of two materials was more similar to a third material, across all possible combinations. Results from both experiments consistently reveal that 1) dynamic line drawings vividly convey mechanical material properties, and 2) the similarity in material judgments between the line and full conditions was larger than that between the dot and full conditions.
We conclude that contour motion carries rich information about mechanical material qualities.