
Research & Summaries of Papers


Proactive distractor suppression in early visual cortex

Richter, van Moorselaar, and Theeuwes (2023)

bioRxiv (preprint)

Avoiding distraction by salient yet irrelevant stimuli is critical when accomplishing daily tasks. One possible mechanism for achieving this is to suppress potentially distracting stimuli such that they no longer compete for attention. Prior behavioural studies on distractor suppression have shown gradients of suppression around locations that frequently contain distracting stimuli, thereby reducing the impact of salient distractors on visual search. As can be seen in the figure below, from Wang & Theeuwes (2018), the RT cost of distractors decreased the closer a salient distractor appeared to the “high-probability distractor location” (dist-0; i.e. the location that frequently contained the distractor).


While the behavioral benefits of distractor suppression are well-established, its neural underpinnings are not yet fully understood. In an fMRI study, we examined whether and how sensory responses in early visual areas show signs of distractor suppression after incidental learning of spatial statistical regularities. Participants performed an additional singleton task in which, unbeknownst to them, one location more frequently contained a salient distractor. Targets appeared equally often at all locations.


We then analyzed whether visual responses, in terms of the fMRI BOLD signal, were modulated by this distractor predictability. Specifically, we first determined receptive-field-specific ROI masks in early visual cortex (EVC) using an independent location localizer. We then extracted the BOLD response at the locations of interest during visual search, as well as during omission trials, in which search was expected but no search display was actually presented.


Our findings indicate that implicit spatial priors shape sensory processing even at the earliest stages of cortical visual processing, evident in early visual cortex as a suppression of stimuli at locations that frequently contained distracting information. Notably, while this suppression was spatially (receptive field) specific, it did extend to nearby neutral locations, and it occurred regardless of whether the distractor, a nontarget item, or the target was presented at this location, suggesting that suppression arises before stimulus identification (panel A below).


Crucially, we observed a similar pattern of spatially specific neural suppression even if search was only anticipated, but no search display was presented (panel B above). Our results thus highlight proactive modulations in early visual cortex, where potential distractions are suppressed preemptively, before stimulus onset, based on learned expectations. Combined, our study underscores how the brain leverages implicitly learned prior knowledge to optimize sensory processing and attention allocation.


If you are interested in the full story, please check out our preprint on bioRxiv:


High-level prediction errors in low-level visual cortex

Richter, Kietzmann, and de Lange (2023)

bioRxiv (preprint)


[Detailed summary to follow. For now please take a look at the twitter summary or the abstract below]

Perception and behaviour are significantly moulded by expectations derived from our prior knowledge. Hierarchical predictive processing theories provide a principled account of the neural mechanisms underpinning these processes, casting perception as a hierarchical inference process. While numerous studies have shown stronger neural activity for surprising inputs, in line with this account, it is unclear what predictions are made across the cortical hierarchy, and therefore what kind of surprise drives this upregulation of activity. Here we leveraged fMRI and visual dissimilarity metrics derived from a deep neural network to arbitrate between two hypotheses: prediction errors may signal a local mismatch between input and expectation at each level of the cortical hierarchy, or prediction errors may incorporate feedback signals and thereby inherit complex tuning properties from higher areas. Our results are in line with this second hypothesis. Prediction errors in both low- and high-level visual cortex primarily scaled with high-level, but not low-level, visual surprise. This scaling with high-level surprise in early visual cortex strongly diverges from feedforward tuning, indicating a shift induced by predictive contexts. Mechanistically, our results suggest that high-level predictions may help constrain perceptual interpretations in earlier areas thereby aiding perceptual inference. Combined, our results elucidate the feature tuning of visual prediction errors and bolster a core hypothesis of hierarchical predictive processing theories, that predictions are relayed top-down to facilitate perception.
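The arbitration logic described in the abstract can be sketched in a toy simulation (all data here are simulated and the variable names are ours, not the paper's): trial-wise surprise is quantified as the feature dissimilarity between the expected and the presented image at a low and a high DNN layer, and the BOLD response is then regressed on both surprise measures at once to see which predictor carries the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

def dissimilarity(a, b):
    """Correlation distance (1 - Pearson r) between two feature vectors."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

# Trial-wise surprise: dissimilarity between DNN features of the expected
# and the presented image, at a low layer and a high layer (simulated here).
low_surprise = np.array([dissimilarity(rng.normal(size=256), rng.normal(size=256))
                         for _ in range(n_trials)])
high_surprise = np.array([dissimilarity(rng.normal(size=64), rng.normal(size=64))
                          for _ in range(n_trials)])

# Simulated early-visual BOLD that, per the feedback hypothesis, scales
# with high-level (but not low-level) surprise.
bold = 0.8 * high_surprise + rng.normal(scale=0.1, size=n_trials)

# Arbitration: regress BOLD on both surprise measures simultaneously and
# compare which predictor carries the effect.
X = np.column_stack([np.ones(n_trials), low_surprise, high_surprise])
betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
# betas[2] (high-level surprise) should dominate betas[1] (low-level surprise)
```

In this simulated case the regression recovers the high-level effect; the study's point is that the same pattern held empirically even in low-level visual cortex.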


Conceptual associations generate sensory predictions

Yan, de Lange, and Richter (2023)


The Journal of Neuroscience

Predictive processing theories suggest that a core computation the brain performs is to use priors to predict bottom-up inputs, with mismatches resulting in prediction errors. In line with this, studies using visual statistical learning have demonstrated larger sensory responses to surprising compared to predictable inputs, possibly reflecting larger prediction errors for surprising inputs.


However, our prior knowledge about the world extends beyond simple visual statistical regularities and includes more abstract, conceptual associations than those probed by visual statistical learning paradigms. This raises a question. For predictions derived from more abstract, conceptual knowledge, do we expect to find sensory prediction errors in the visual system? Or are predictions more constrained, involving modulations only within the domain where priors have been acquired?

Here we exposed participants to word-word pairs where the first word probabilistically predicted the identity of the second word. During a subsequent MRI session, we replaced the second words with corresponding exemplar images.



Crucially, the words were not predictive of the images, hence any expectations our participants had about the object images must have been generalized from the word-word pairs to the word-image pairs by virtue of the words and images conceptually referring to the same object.


We found increased sensory responses to unexpected compared to expected object images throughout the ventral visual stream, even all the way down to early visual cortex.



We also show that the prediction-induced modulations were selectively present for preferred, but absent for non-preferred stimulus categories. This selectivity suggests that these are “true” sensory prediction errors, and not merely modulations due to global surprisal or arousal.



Combined, our results demonstrate that predictions, and the resulting prediction errors, extend beyond simple sensory regularities and generalize across domains, for example by using recently acquired conceptual associations to generate feature-specific sensory predictions.

Does this mean that conceptual predictions lead to a pre-activation of all possible visual feature configurations that could make up the expected object category? No, we do not think so – at least not in early visual cortex.

If you are interested in how we further interpret the results take a look at our paper!


Dampened sensory representations for expected input across the ventral visual stream

Richter, Heilbron, and de Lange (2022)

Oxford Open Neuroscience

The perceptual system faces at least two challenges: to represent the world as quickly and accurately as possible, and to promote processing of novel information. Relying on prior knowledge to guide perception may help to meet both challenges.

One well-established neural consequence of using prior knowledge in perception is expectation suppression: the attenuation of sensory responses to expected compared to unexpected stimuli.


Fig from: Richter & de Lange, eLife 2019


However, it remains unclear what neural mechanism underlies this phenomenon. Population sharpening models propose that expectations preferentially suppress neurons tuned away from the expected stimulus. This process would bias perception in line with our expectations. Dampening (or cancellation) models argue that expectations preferentially suppress neurons tuned towards expected stimuli. This would reduce redundancy in the sensory stream, while at the same time favoring processing of novel or surprising information.


Fig from: Richter 2021


Here we used forward models, building on work by Alink et al. (2018), to elucidate the neural mechanism underlying expectation suppression in the visual system. First, we analyzed fMRI data from two previous studies, which manipulated perceptual expectations using visual statistical learning. For details see Richter & de Lange (2019) and Richter et al. (2018).


The neural effects of perceptual expectations were characterized, across the ventral visual stream (V1, LOC, FG), in terms of seven commonly used fMRI outcome metrics, both univariate and multivariate.



Next, we used forward models to test which mechanism best explained the fMRI results. We defined dampening as a local feature-specific gain modulation, in which the gain of neural populations tuned towards the expected stimulus features is reduced. We defined population sharpening as a remote feature-specific gain modulation, in which the gain of neural populations tuned away from the expected stimulus features is reduced. Moreover, we modeled feature-unspecific effects as a global gain modulation.
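The three gain modulations above can be illustrated with a minimal sketch, assuming a toy population of Gaussian-tuned units (the tuning width and modulation strength are illustrative, not the actual forward-model parameters):

```python
import numpy as np

# Toy population of feature-tuned units with Gaussian tuning curves.
prefs = np.linspace(-np.pi, np.pi, 100, endpoint=False)  # preferred features
stimulus = expected = 0.0  # the expected stimulus is presented

def tuning(pref, value, width=0.5):
    return np.exp(-0.5 * ((pref - value) / width) ** 2)

drive = tuning(prefs, stimulus)        # feedforward population response
similarity = tuning(prefs, expected)   # ~1 for units tuned towards the
                                       # expected feature, ~0 for units tuned away
amount = 0.5  # hypothetical modulation strength

damped    = drive * (1 - amount * similarity)        # local gain: suppress units tuned TOWARDS
sharpened = drive * (1 - amount * (1 - similarity))  # remote gain: suppress units tuned AWAY
scaled    = drive * (1 - amount)                     # global gain: feature-unspecific

# All three reduce the mean (univariate) response, but they make distinct
# predictions about the multivariate response pattern across the population.
```

The point of the sketch is that the three models are distinguishable only by which units carry the suppression, which is why multivariate fMRI metrics are needed to arbitrate between them.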



Results showed that perceptual expectations are best modeled by a feature-specific local gain modulation of neural responses, suppressing activity particularly for neural populations tuned towards the expected stimulus features.




This dampening of neural responses suggests that perceptual expectations, derived from statistical regularities, may reduce information redundancy and bias information processing towards surprising, novel information.

The paper is available in the journal Oxford Open Neuroscience (open access).


Prediction throughout visual cortex:
How statistical regularities shape sensory processing

Richter (2021)

Doctoral Thesis


When we look at the world, we employ our prior knowledge to understand what we see. Typically, we only become aware of this process when our predictions turn out to be incorrect. Consider, for example, the situation where you turn a corner and almost bump into someone. The surprise and startle you experience result from a prediction error. In my dissertation, I explored how the brain uses predictions to guide our visual perception. A simple illustration of this can be seen in Figure 1 below. At first glance, you likely perceive only disjointed shapes and lines. Now, proceed to the next figure below that (Figure 2) and then return to re-examine Figure 1.

Upon recognizing the image of a cat in Figure 1, your perception likely shifts drastically: from a jumble of lines and shapes to recognizing a cat. The image itself hasn’t changed, only your prior knowledge has, subsequently altering your conscious perception. This experience suggests that our knowledge shapes our visual interpretation.


Figure 1. Illustration of the impact of knowledge on perception. Initially, the image appears to consist of random shapes. However, after moving to the next image and viewing Figure 2, the hidden image of a cat becomes apparent. Once aware of the image's content, one's perception of random shapes transforms into that of a cat, even though the image remains unchanged.

In Chapter 1, I introduce the central question of my dissertation: how does prior knowledge influence our perception? From the above example, it's evident that knowledge can indeed alter what we see. However, for most of our lives, we remain unaware that we're combining prior knowledge with sensory information to form our perceptions. The process seems to be one of unconscious perceptual inference, meaning the brain carries out this operation automatically. In chapters 2-5 of my dissertation, I used magnetic resonance imaging (MRI) to investigate how the brain acquires and then uses prior knowledge to anticipate sensory input.


Figure 2. The original image from Figure 1. On revisiting Figure 1, the seemingly random shapes and lines might transform based on the new understanding gleaned from this original image.


In Chapter 2, I studied how humans predict incoming sensory information and how these predictions modulate neural processes. I presented participants with pairs of images. Unbeknownst to them, certain images predicted others, meaning seeing Image A would increase the likelihood of Image B appearing next. Using these statistical regularities, I discerned how the brain reacts to expected versus unexpected images. Do correct predictions about Image B alter its processing in the sensory brain? My results suggest so. The sensory brain responds less vigorously and distinctly to correctly predicted images, even without any conscious intention to predict. In other words, our brain seems to anticipate sensory input based on learned statistical regularities.

In Chapter 3, I probed how automatic these predictions are. Earlier studies, including my Chapter 2 findings, implied that we continuously and automatically predict, even without intending to. Yet, I wanted to determine the role of attention in this process. To test this, I again presented participants with image pairs containing statistical regularities. Participants were instructed to focus either on the images or on letters displayed directly above the images. Surprisingly, when participants' attention was diverted from the images, all predictive effects vanished: the brain now reacted similarly to expected and unexpected images. But when attention was refocused on the images, neural reactions to unexpected images were again more pronounced. My results suggest that while our brain seemingly predicts visual input automatically to some degree (probably without any explicit intention to do so), attention to the predictable input is nonetheless crucial.

In Chapter 4, I further chart the diminished neural response for expected versus unexpected input. One possibility is that the brain minimizes neural responses to correctly predicted images by reducing the response of neurons reacting vigorously to the anticipated image. An expected image is less informative because it is predictable. Hence, it makes sense for the brain to conserve energy by not reacting as vigorously to anticipated sensory input, effectively "dampening" the response. Another theory suggests that while the brain still reacts strongly to correctly predicted image features, the overall response is more specific and contains less noise, resulting in a lowered average neural response. Using computational models and data from Chapters 2 and 3, I conclude that correctly anticipated sensory inputs appear to be 'dampened'. This implies the brain seems to reduce anticipated information, perhaps because such sensory input is less informative. Thus, our attention is directed towards potentially significant unexpected sensory information.

Chapter 5 delves into the limits of learning statistical regularities. Previous chapters demonstrated that people can effortlessly and incidentally grasp statistical patterns. I then explored whether this learning process extends across the senses. Instead of showing image pairs to participants, I introduced sounds predicting images, requiring participants to combine information from two different senses. Astonishingly, participants did not seem to learn these audio-visual regularities: neither behavioral nor neural evidence indicated statistical learning effects. This implies the statistical learning mechanism discussed in Chapters 2-4 might depend on neural mechanisms within a specific modality (e.g., changes in the visual cortex).


Chapter 6 synthesizes the findings from the preceding chapters into a cohesive narrative. At first, you likely saw random lines in Figure 1. Yet, after viewing Figure 2, your perception of Figure 1 changed. This change demonstrates that our prior knowledge has a direct impact on our perception. My research illustrates how the brain employs prior knowledge to guide perception. The brain seems exceptionally proficient at detecting and learning the statistical structure of the sensory world, even without any intent to learn. Once acquired, this knowledge is used to predict sensory input, with discrepancies leading to prediction errors, as when you almost run into someone after turning a corner. In the sensory brain, such a prediction error is evident as a stronger and more distinct neural response. My findings also suggest that the brain might use predictions to dampen responses to predicted sensory input, perhaps because correctly predicted sensory information contains less new information than unexpected input. In conclusion, prediction appears to be a fundamental facet of sensory information processing.


If you find this work interesting, chapter 1 of my thesis contains a more elaborate introduction and chapter 6 a discussion/summary of the results.


Statistical learning attenuates visual activity only for attended stimuli

Richter and de Lange (2019)



Statistical learning describes our ability to acquire and utilize statistical regularities in the environment. Previous research shows that statistical learning can occur in different contexts and modalities. In fact, learning may even occur without explicit awareness of any regularities, suggesting that statistical learning is a fairly automatic process.



In this fMRI study, we investigated the automaticity of the sensory effects of statistical learning. In particular, we show that after statistical learning, sensory responses to expected stimuli are attenuated relative to responses to unexpected stimuli throughout the ventral visual stream. However, this sensory attenuation depended on participants attending the predictable stimuli and ceased when attention was drawn away to a competing stimulus, even though the stimuli were nonetheless processed by the visual system.



These results show that attention gates the sensory effects of statistical learning, suggesting that predictive processes in the visual system are not necessarily automatic, but may depend on attending the predictable stimuli.

Take a look at our paper (open access on eLife) to see more interesting results, suggesting that this sensory suppression may partially reflect global surprise, as well as an in-depth discussion of our results.


Suppressed sensory response to predictable object stimuli throughout the ventral visual stream

Richter, Ekman, and de Lange (2018)

The Journal of Neuroscience


Our brains do not just passively record the environment, but rather actively predict upcoming stimuli. In our recent fMRI study, we investigated how neural responses to everyday objects change depending on whether they are expected or unexpected.

Participants saw two object images on each trial and pressed a button whenever an image was turned upside-down. Whether or not an image was flipped upside-down was completely random. However, unbeknownst to our participants, the first image on each trial predicted the identity of the second image, so over time participants came to expect a particular second image given the first image. We show that neural responses throughout most of the ventral visual stream were suppressed in response to expected compared to unexpected objects. Furthermore, we found that neural representations of expected stimuli were dampened in object-selective lateral occipital cortex (LOC).
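The pair structure can be sketched as a toy trial generator (the image labels and the probability are illustrative, not the actual design parameters):

```python
import random

# Illustrative pair structure: the leading image predicts the trailing image.
expected_pair = {"A": "X", "B": "Y", "C": "Z"}  # learned associations
p_expected = 0.75  # hypothetical; how often the trailing image matches the prediction

def make_trial(rng):
    first = rng.choice(list(expected_pair))
    if rng.random() < p_expected:
        second = expected_pair[first]  # expected trial
    else:
        others = [v for v in expected_pair.values() if v != expected_pair[first]]
        second = rng.choice(others)    # unexpected trial
    return first, second

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(1000)]
frac = sum(expected_pair[f] == s for f, s in trials) / len(trials)
# frac should be close to p_expected
```

Because the leading image is only probabilistically predictive, expected and unexpected trials are physically identical on average, so any response difference reflects the learned regularity rather than the stimuli themselves.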



Combined, our results demonstrate that perceptual expectations have a widespread effect on neural processing. Moreover, the concurrent expectation suppression and dampening effect hint at the possibility that expectations serve to filter out predictable, task-irrelevant stimuli.

The full paper is now available here.
