More Than Meets The Eye: A quick look into mechanisms of visual perception
By Ilgin Cebioglu, Featured Writer.
Vision is one of the most fascinating and vital of human senses. Although we may not realise it amid the rapid flow of our daily lives, vision provides some of the most crucial mechanisms for survival. If you close your eyes for a while and attempt some of your daily tasks, you will likely find your world suddenly difficult to navigate: it becomes hard to move towards goals, recognise people and objects, reach for things, use tools precisely, or avoid dangerous obstacles. Vision is thus a critical sense, enabling us to perceive colours, objects, motion, orientation, and depth. And although the eyes are regarded as the main organ of vision, most of our visual experience is actually shaped by the brain. So how does light reflected from an object become an image, travelling from the eye into the brain and deep into its cells?
The journey that transforms light into perceivable images follows a pathway broadly called ‘visual processing’. Visual processing starts at the retina, the inner layer of the eye that is sensitive to light (Goldstein, 2009). Light reflected by an object passes through the lens of the eye, which bends it so that it falls onto the retina. The retina contains photoreceptor cells that transform light into electrical signals, which are then conducted to the relevant areas of the brain (Goldstein, 2009).
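To get a feel for why the lens must bend light by just the right amount, here is a toy Python sketch using the thin-lens equation and the textbook ‘reduced eye’ approximation (a single lens with a focal length of roughly 17 mm). The single-lens model and the distances are simplifications chosen for illustration, not a model of real eye optics.

```python
# Toy "reduced eye": one thin lens with focal length ~17 mm (a standard
# textbook approximation; the real eye is a cornea-plus-lens system).

def image_distance_mm(object_distance_mm, focal_length_mm=17.0):
    """Thin-lens equation 1/f = 1/d_object + 1/d_image, solved for d_image."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

for d_mm in [10_000_000.0, 1_000.0, 250.0]:   # very far away, 1 m, 25 cm
    print(f"object at {d_mm / 1000:>8.2f} m -> "
          f"image {image_distance_mm(d_mm):.1f} mm behind the lens")
```

For a fixed focal length, the image of a nearby object would form behind the retina; this is why the lens changes shape (accommodation) to keep the image focused on the retina itself.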
Photoreceptors come in two types, cone cells and rod cells, named after their shapes (Bruce et al., 2003). Cone cells help us perceive colour and detail, and work best in bright light. There are blue, green, and red cone cells, each responding to a different range of wavelengths. One important landmark on the retina is the fovea, where the sharpest image of an object is formed; it can be thought of as the focus point of a camera. Cones are most densely packed at the fovea, which is why the sharpest image forms at this point (Goldstein, 2009). The other type of photoreceptor, the rod cell, is sensitive not to colour or detail but to light itself, which allows rods to work in very dim light (Wolfe et al., 2018). As the light level falls below a certain threshold, rods take over vision, a transition known as the ‘Purkinje shift’. This change in photoreceptor activity causes the detailed colour vision of daytime to fade into night vision.
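The hand-over from cones to rods can be pictured with a toy model. In the sketch below, the response curves and the half-max constants are invented for illustration, not physiological measurements; the point is simply that cone-driven vision gives way to rod-driven vision as light dims.

```python
# Toy model of the Purkinje shift: as luminance falls, cone responses fade
# and vision is handed over to the far more light-sensitive rods.

def cone_activity(luminance):
    """Cones: strong response in bright (photopic) light, weak in dim light."""
    return luminance / (luminance + 1.0)      # saturating curve, arbitrary units

def rod_activity(luminance):
    """Rods: half-max response at ~100x dimmer light than cones (illustrative)."""
    return luminance / (luminance + 0.01)

for lum in [10.0, 1.0, 0.1, 0.01, 0.001]:     # daylight down to starlight
    c, r = cone_activity(lum), rod_activity(lum)
    mode = "cone vision (colour, detail)" if c >= 0.5 else "rod vision (monochrome)"
    print(f"luminance {lum:6.3f}: cones {c:.2f}, rods {r:.2f} -> {mode}")
```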
Once light has been converted into electrical signals, the output of each photoreceptor is delivered to ganglion cells, where an overall signal is formed. We have far fewer cones than rods, roughly a 1:20 ratio (Abbasov, 2019). Consequently, many rod cells converge onto a single ganglion cell, whereas only a few cones connect to each one. This lack of convergence makes cones sensitive to detail: each photoreceptor corresponds to a very small region of our visual field, and detail can be resolved when the responses of the individual cells covering these regions can be told apart. The high convergence of rods, by contrast, sacrifices detail. Rods’ low sensitivity to colour and detail is the reason we cannot distinguish colours or read in the dark. However, merging information from many rod cells yields far greater light sensitivity (Abbasov, 2019). With many active cone cells during the daytime, we can distinguish colours and details far better than we can at night.
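This trade-off between sensitivity and detail is easy to simulate. In the sketch below, a dim flash adds a small signal to every receptor on top of random noise; pooling twenty rods into one ganglion cell (the rough 20:1 convergence above) averages the noise away, so the flash is detected more reliably, at the price of no longer knowing which of the twenty receptors was stimulated. All numbers are arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_signal, noise_sd, n_trials = 0.2, 1.0, 10_000

# One cone -> one ganglion cell: a single noisy reading per trial.
cone_out = dim_signal + rng.normal(0.0, noise_sd, n_trials)

# Twenty rods -> one ganglion cell: the average of 20 noisy readings,
# which shrinks the noise by a factor of sqrt(20).
rod_out = (dim_signal + rng.normal(0.0, noise_sd, (n_trials, 20))).mean(axis=1)

# How often does each pathway correctly report "light present" (output > 0)?
print(f"cone pathway detects the flash in {np.mean(cone_out > 0):.0%} of trials")
print(f"rod pathway detects the flash in {np.mean(rod_out > 0):.0%} of trials")
```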
After the ganglion cells merge the signals from the photoreceptors, the combined output is relayed along the optic nerve, the main nerve running from the eye to the brain. Here we come across another retinal landmark, the blind spot, the point where the optic nerve leaves the eye. There are no photoreceptors at this point, so no image can form there. The reason we do not experience incomplete images of objects is that the two eyes fill in for each other’s blind spots (Albert and Gamm, 2019). The blind spot matters because it demonstrates one of the reasons we have two eyes and how they work together to form an accurate representation of the world. Some of the optic nerve fibres from each eye cross over to the opposite side of the brain at a junction called the optic chiasm, which ensures that information from both eyes is processed by both hemispheres, and in particular by the visual cortex (Goldstein, 2009). Without the optic chiasm, we could not connect the images in our right and left visual fields (Purves et al., 2001).
Finally, the signals from both eyes travel along the optic radiations to the visual cortex, located predominantly at the back of the head in the occipital lobe. Dreaming is thought to be associated with increased activity in this area during sleep (Igawa et al., 2001). However, the occipital lobe is not the only region responsible for visual perception. The parietal lobe, located at the upper middle of the brain, plays an important role in the perception of motion, while the temporal lobe, located roughly behind the temples, is responsible for object recognition (Milner and Goodale, 2006). Thanks to these two streams, we can accurately tell “where” and “what” an object is, a feat made possible by highly specialised cells in these structures. The two functions are dissociable and can work independently: in a classic study, monkeys with parietal lobe lesions could still recognise objects but could not tell where they were, whereas monkeys with temporal lobe lesions could not recognise objects but could still locate them (Ungerleider & Mishkin, 1982).
Going deeper, perception at the cellular level is highly specific. There are neurons that become active only in the presence of a certain feature, such as a particular angle, direction of motion, or length. Some neurons may be even more specialised: so-called ‘Jennifer Aniston cells’ have been reported to fire only when images of the famous actress are shown (Quiroga et al., 2005). Even so, cells are thought to code for more than one object or person, since dedicating a cell to every object would cost too much energy. More likely, small populations of cells fire together when a certain type of stimulus is shown, and their firing pattern changes as the stimulus moves further from what the cells ‘expect’ (Goldstein, 2009). The more closely the image fits the representation in the brain, the more the cells fire.
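A minimal sketch of such population coding: give each cell a preferred stimulus and a bell-shaped tuning curve, and the group’s joint firing pattern, rather than any single cell, identifies the stimulus. The five cells and the Gaussian tuning below are assumptions made for illustration, not recorded data.

```python
import numpy as np

preferred = np.array([0.0, 2.5, 5.0, 7.5, 10.0])  # each cell's favourite stimulus
tuning_width = 2.0                                 # how broadly each cell is tuned

def population_response(stimulus):
    """Each cell fires more the closer the stimulus is to its preferred value."""
    return np.exp(-((stimulus - preferred) ** 2) / (2.0 * tuning_width ** 2))

for stim in [5.0, 6.0, 9.0]:
    print(f"stimulus {stim:4.1f}: firing pattern", np.round(population_response(stim), 2))
```

The nearest-tuned cell fires hardest and its neighbours fire progressively less, so the pattern drifts smoothly as the stimulus moves away from what the cells ‘expect’, just as described above.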
This is a very basic summary of only some of the steps involved in visual processing; there are many other processes, such as the perception of depth, edges, luminance differences, motion, and even illusions. Much also remains unknown. Contemporary research on visual perception is chiefly concerned with how sensation turns into perception and with why and how visual illusions deceive us. Answering these questions will be an important step towards understanding how we add meaning to what we sense, how we interpret the external world to form different perspectives on the objects around us, and how our behaviour follows from those perspectives.
References
Albert, D.M., & Gamm, D.M. (2019). Blind spot. Encyclopaedia Britannica [online]. Accessed 18 December 2019. Available from: https://www.britannica.com/science/blind-spot
Bruce, V., Green, P.R., & Georgeson, M.A. (2003). Visual Perception: Physiology, Psychology and Ecology. Hove: Psychology Press.
Goldstein, E.B. (2009). Sensation and Perception. Belmont, CA: Wadsworth Cengage Learning.
Hubel, D.H. (1988). Chapter 3: The eye. In: Eye, Brain, and Vision. Scientific American Library Series. New York: W. H. Freeman and Company.
Igawa, M., Atsumi, Y., Takahashi, K., Shiotsuka, S., Hirasawa, H., Yamamoto, R., Maki, A., Yamashita, Y., & Koizumi, H. (2001). Activation of visual cortex in REM sleep measured by 24-channel NIRS imaging. Psychiatry and Clinical Neurosciences [online], 55(3), 187–188. Accessed 18 December 2019. Available from: https://www.ncbi.nlm.nih.gov/pubmed/11422835
Milner, A.D., & Goodale, M.A. (2006). The Visual Brain in Action (2nd ed.). Oxford: Oxford University Press.
Purves, D., Augustine, G.J., Fitzpatrick, D., et al. (Eds.). (2001). Visual field deficits. In: Neuroscience (2nd ed.) [online]. Sunderland, MA: Sinauer Associates. Available from: https://www.ncbi.nlm.nih.gov/books/NBK10912/
Quiroga, R.Q., Reddy, L., Kreiman, G., Koch, C., & Fried, I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435(7045), 1102–1107.
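Ungerleider, L.G., & Mishkin, M. (1982). Two cortical visual systems. In: D.J. Ingle, M.A. Goodale, & R.J.W. Mansfield (Eds.), Analysis of Visual Behavior (pp. 549–586). Cambridge, MA: MIT Press.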
Wolfe, J. M., Kluender, K. R., Levi, D. M., Bartoshuk, L., Herz, R., Klatzky, R. L., & Merfeld, D. M. (2018). Sensation & perception. New York, NY: Sinauer Associates.