TL;DR
That is one confusing title! The point is this: when light reaches your eyes, you’re not immediately aware of it. It takes some time for your visual system to process the light, and to translate it into something the rest of your brain can work with. When that’s done, you consciously ‘see’. In a new paper, we show that the process of becoming aware of what you see is affected by how large an object is. With an oversimplified example: if light bounces off a puppy and into your eyes, it takes a fraction of a second for you to become aware of the puppy. And it takes a fraction of a second longer if it’s a fat puppy.
What is ‘seeing’?
This is one of those questions that seems simple at first, but becomes quite hard to answer once you start thinking about it. In general, when we talk about ‘seeing’, we mean consciously perceiving a visual stimulus. For example, you are currently ‘seeing’ these words. They have registered in your mind, and you are aware of them. However, a very complex process preceded your ‘seeing’ of these words!
‘Seeing’ is to do with light. It comes from a source, for example the sun, and bounces off everything around us. When light reaches your eyes, it activates sensors in your retina. These sensors send electrical signals to a brain area called the lateral geniculate nucleus (or LGN for short), which is part of the thalamus. The LGN processes the signals from the retina, and sends its output on towards the visual cortex, which is distributed across the occipital lobe of the brain (located in the back of your head). In the visual cortex, a whole hierarchy of different sub-areas processes the signals from your retina, each picking up different aspects of what you are looking at (from edges and colours, to specific shapes and objects). In the classic view, the signal from your retina initially shoots through in one direction: from your retina, through your LGN, through all the different brain areas of the visual hierarchy, and then on to the rest of your brain. This is referred to as the feedforward sweep. However, at some point (say 50 milliseconds after the light reached your eyes), brain areas begin talking back down the visual hierarchy! This is referred to as recurrent processing. Some people believe that recurrent processing is linked to consciousness, and that the onset of this process is when you become aware of what your retina saw 50 milliseconds earlier.
Now, I don’t mean to pretend that we know anything about consciousness for sure. In fact, consciousness is a really hard problem (no, literally, that’s what people call it). What we do agree on is that not everything our eyes register is also consciously perceived. Our visual system filters out information that doesn’t seem relevant, and that information never reaches our awareness. Even the information that does get through takes some time to reach awareness.
So, back to our question: ‘seeing’ is what happens when we look at something. If we’re outside and look at a puppy, light from the sun will bounce off that puppy and into our eyes. Our retinas then send electrical signals through a whole chain of brain areas, which in turn send signals to each other, and we become aware of the puppy. Importantly, this process isn’t instantaneous, but takes several tens of milliseconds. Let’s refer to ‘becoming aware’ of a puppy as perceiving a puppy.
When do you perceive what you see?
Now that we’ve established that perceiving takes a bit of time, we come to the point of our latest paper. It investigates whether the size of a stimulus affects how quickly you perceive it. The idea came from Ryota Kanai and Chris Paffen, and I collected the first data on this project all the way back in 2011 (!) when I was a research assistant at Utrecht University. Since then, Genji Kawakita and Maxine Sherman have also joined the project, and did an amazing job collecting more data and helping with the analyses and the write-up.
So, what did we find? Well, it’s a complex story! Let’s start simple: we ran reaction time experiments in which participants had to whack a button as soon as they saw a blob appear on a computer screen. We used blobs of different sizes. It turns out that size doesn’t really matter here: people are as quick to respond to a small blob as they are to a bigger one.
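If you’re curious what a trial in such a task looks like in practice, here’s a minimal sketch in Python, using the PsychoPy library. To be clear: this is not our actual experiment code, and the blob sizes, timings, and response key are illustrative assumptions.

```python
# A minimal sketch of a simple detection trial (NOT our actual
# experiment code; sizes, timings, and keys are made up).
import random
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color='grey', units='pix')
clock = core.Clock()

for size in random.sample([20, 40, 80, 160], 4):  # blob diameters in pixels
    # A Gaussian 'blob': a patch with no texture and a Gaussian mask.
    blob = visual.GratingStim(win, tex=None, mask='gauss', size=size)
    win.flip()                           # blank screen
    core.wait(random.uniform(1.0, 2.0))  # random foreperiod
    blob.draw()
    win.callOnFlip(clock.reset)          # time zero = the frame the blob appears
    win.flip()
    key, rt = event.waitKeys(keyList=['space'], timeStamped=clock)[0]
    print(f"size = {size} px, RT = {rt * 1000:.0f} ms")

win.close()
core.quit()
```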
The next experiment was also a reaction time experiment, but this time participants didn’t just whack a button when a blob appeared on the screen. They had to indicate whether the blob was on the left side of the screen, or on the right. To do this, they whacked one button if the blob appeared on the left, and a different button if it appeared on the right. In this experiment, blob size made no difference either.
From these reaction time experiments, you could conclude that stimulus size doesn’t affect when a stimulus is perceived. However, simply detecting whether something is there (or whether it’s on the left or the right) doesn’t necessarily require you to process the entire stimulus. You simply have to notice that something is happening, and whack that button! A different approach is to show participants two stimuli, and ask them which appeared first. This comparison between the onsets of two stimuli requires a participant to be aware of when each stimulus appeared.
So that’s what we did! And then something weird happened: larger blobs were perceived to appear later than smaller blobs, even when the two appeared at exactly the same time! This was also true when we didn’t ask people which blob appeared earlier, but simply asked whether the blobs appeared at the same time.
Now, you might not think this is weird, but earlier research tells us that more intense stimuli (for example, brighter ones) are perceived to appear earlier, not later! Larger stimuli are also more intense, so you would expect them to be perceived to appear earlier. Why did we find the opposite? Good question! We speculate a bit in the paper, and offer some potential explanations. The truth is, though, that nobody knows. For now. And that is what science and toddlers have in common: they raise more questions than they answer.
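As an aside, you might wonder how ‘perceived to appear later’ is quantified. A common approach in this kind of research (sketched below with made-up numbers, not our actual data or analysis code) is to fit a psychometric curve to the temporal order judgements, and to read off the point of subjective simultaneity (PSS): the onset difference at which both blobs are equally likely to be reported as appearing first.

```python
# A sketch of estimating a point of subjective simultaneity (PSS)
# from temporal order judgements. The numbers below are fabricated
# for illustration; this is not our actual data or analysis.
import numpy as np
from scipy.optimize import curve_fit

# Stimulus onset asynchrony in ms; negative = large blob shown first.
soa = np.array([-80.0, -40.0, -20.0, 0.0, 20.0, 40.0, 80.0])
# Proportion of trials on which 'large blob first' was reported.
p_large_first = np.array([0.95, 0.80, 0.60, 0.40, 0.25, 0.12, 0.04])

def logistic(x, pss, slope):
    # Decreasing logistic curve; it crosses 0.5 exactly at x == pss.
    return 1.0 / (1.0 + np.exp((x - pss) / slope))

(pss, slope), _ = curve_fit(logistic, soa, p_large_first, p0=(0.0, 20.0))
# A negative PSS means the large blob had to be shown earlier than
# the small one to be perceived as simultaneous, i.e. the large blob
# was perceived later.
print(f"PSS = {pss:.1f} ms")
```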
Note: The order in which I describe the experiments here is not the order in which we describe them in the paper. (The paper describes them in chronological order, i.e. in the order in which we ran them.) In addition, I’ve left out some details from the paper. If you’re interested, see the reference below!
Reference
- Kanai, R., Dalmaijer, E.S., Sherman, M.T., Kawakita, G., & Paffen, C.L.E. (2017). Larger stimuli require longer processing time for perception. Perception, 46(5). doi:10.1177/0301006617695573