The findings also help explain why people who are born blind and later regain their sight have more difficulty recognizing objects in black and white images.
Although the human visual system has elaborate machinery for processing color, the brain has no trouble recognizing objects in black and white images. A new study from MIT offers a possible explanation for how the brain becomes so adept at recognizing both color and color-degraded images.
Drawing on experimental data and computational models, the researchers found evidence that this ability may be rooted in development. Early in life, when infants receive very limited color information, the brain is forced to learn to distinguish objects by their luminance, the intensity of the light they reflect or emit, rather than by their color. Later, as the retina and cortex become better able to process color, the brain takes in color information as well while retaining its earlier-acquired ability to recognize images without relying heavily on color cues.
This finding is consistent with previous research that indicates initially degraded visual and auditory inputs can actually benefit the early development of perceptual systems.
“The general notion that emerges is that there is something important about the initial limitations of our perceptual systems, and it goes beyond color vision and visual acuity. Some of the work our lab has done in hearing also suggests that it’s important to limit the richness of the information that the newborn system is exposed to initially,” said Pawan Sinha, a professor of brain and cognitive sciences at MIT and a senior author of the study.
The finding also helps explain why children who are born blind but later regain sight through removal of congenital cataracts have much more difficulty recognizing objects presented in black and white. Because these children are exposed to rich color input as soon as their sight is restored, they may become overly reliant on color, leaving them far less able to cope when color information is altered or removed.
MIT postdoctoral researchers Marin Vogelsang and Lukas Vogelsang, along with Priti Gupta, a research scientist with the Prakash Project, are the lead authors of the study, published today in the journal Science. Sidney Diamond, a retired neurologist who is now a research affiliate at MIT, is also an author of the paper.
Seeing in Black and White
The researchers’ exploration of how early color experience affects later object recognition grew out of a simple observation of children born with congenital cataracts whose sight was surgically restored later in childhood. In 2005, Sinha launched the Prakash Project (Sanskrit for “light”) in India to identify and treat children with reversible forms of vision loss.
Many of these children were blind because of bilateral cataracts, a condition that often goes untreated in India. India has the largest population of blind children in the world, estimated at between 200,000 and 700,000.
Children treated through the Prakash Project can also choose to take part in studies of their visual development. This research has helped scientists learn more about how the brain reorganizes after vision is restored, how the brain estimates brightness, and other aspects of vision.
In this study, Sinha and his colleagues gave the children a simple object recognition test, showing them both color and black and white images. For children who had normal sight from birth, converting color images to grayscale had no effect on their ability to recognize the depicted objects. For children who had undergone cataract surgery, however, performance dropped significantly on the black and white images.
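For illustration only, the kind of color-to-grayscale conversion described here amounts to discarding everything except luminance. A minimal Python sketch using Pillow, with a placeholder filename, might look like this:

```python
# Illustrative sketch of the image manipulation described above: converting
# a color photograph to grayscale keeps only luminance information.
from PIL import Image

color_img = Image.open("object_photo.jpg")  # placeholder filename
# Pillow's "L" mode applies the ITU-R 601 luma weights: 0.299 R + 0.587 G + 0.114 B.
gray_img = color_img.convert("L")
gray_img.save("object_photo_gray.jpg")
```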
This led the researchers to suspect that the nature of the visual input children receive early in life may be crucial in shaping how well they can recognize objects presented in black and white and, more generally, how well they can adapt to changes in color. In normally sighted newborns, the retinal cone cells are underdeveloped at birth, so infants initially have poor visual acuity and poor color perception. Over the first years of life, as the cone system matures, vision improves markedly.
Because the immature visual system receives so little color information during this period, the researchers hypothesized that the infant brain is forced to become proficient at recognizing images with reduced color cues. They further proposed that children born with cataracts that are removed later may come to rely too heavily on color cues when recognizing objects, since they have good color perception from the moment their sight is restored.
To test this hypothesis rigorously, the researchers used a standard convolutional neural network, AlexNet, as a computational model of vision, training it to recognize objects under different input regimes. In one training scheme, they first showed the model grayscale images and only later introduced color images, roughly mimicking the way color vision becomes richer as normally sighted infants mature.
A second training scheme used only color images. This approximates the experience of children in the Prakash Project, who can process full-color information as soon as their cataracts are removed.
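To make the comparison concrete, here is a minimal sketch of these two curricula, assuming a PyTorch/torchvision setup; the dataset path, class count, epoch counts, and optimizer settings are placeholder assumptions, not the authors’ actual configuration.

```python
# Illustrative sketch (not the authors' code): training AlexNet under two
# curricula, grayscale-then-color ("developmental") vs. color-only ("Prakash-like").
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

def make_loader(grayscale: bool, root: str = "path/to/object_images") -> DataLoader:
    """Build an image loader; grayscale=True collapses color to luminance."""
    tfms = [transforms.Resize((224, 224))]
    if grayscale:
        # Keep three channels so the same network accepts both input types.
        tfms.append(transforms.Grayscale(num_output_channels=3))
    tfms.append(transforms.ToTensor())
    dataset = torchvision.datasets.ImageFolder(root, transform=transforms.Compose(tfms))
    return DataLoader(dataset, batch_size=64, shuffle=True)

def train(model: torch.nn.Module, loader: DataLoader, epochs: int) -> None:
    """Standard supervised training loop for object classification."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Development-inspired curriculum: grayscale images first, then full color.
dev_model = torchvision.models.alexnet(num_classes=100)  # class count is a placeholder
train(dev_model, make_loader(grayscale=True), epochs=10)
train(dev_model, make_loader(grayscale=False), epochs=10)

# Prakash-like curriculum: color images only, for the same total exposure.
prakash_model = torchvision.models.alexnet(num_classes=100)
train(prakash_model, make_loader(grayscale=False), epochs=20)
```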
The researchers found that the development-inspired model accurately recognized objects in either type of image and was also resilient to other color manipulations. The Prakash-like model trained solely on color images, however, did not generalize well to grayscale or otherwise color-altered images.
“The reality is that this Prakash-like model does very well on color images but poorly on everything else. If it doesn’t get the initial color-degraded training, it just won’t generalize, perhaps because it relies too heavily on specific color cues,” said Lukas Vogelsang.
The development-inspired model’s strong generalization is not simply a consequence of having been trained on both color and grayscale images; the order in which those images are presented matters. Another object recognition model, trained first on color images and only afterward on grayscale images, performed poorly at recognizing objects in black and white.
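Continuing the illustrative sketch above, the reversed-order control and a simple check of grayscale generalization could be written as follows; the held-out test directory and all settings remain placeholder assumptions.

```python
# Continuing the sketch above: a reversed-order control (color first, then
# grayscale) and a simple check of how each model handles black and white images.
def accuracy(model: torch.nn.Module, loader: DataLoader) -> float:
    """Fraction of images classified correctly."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / total

reversed_model = torchvision.models.alexnet(num_classes=100)
train(reversed_model, make_loader(grayscale=False), epochs=10)  # color first
train(reversed_model, make_loader(grayscale=True), epochs=10)   # then grayscale

# Evaluate both curricula on grayscale images the models have not seen before.
gray_test = make_loader(grayscale=True, root="path/to/heldout_images")
print("grayscale-then-color:", accuracy(dev_model, gray_test))
print("color-then-grayscale:", accuracy(reversed_model, gray_test))
```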
Sinha stated, “What’s important is not just the steps in the developmental dance, but their order.”
The Advantage of Limited Sensory Input
By examining the models’ internal organization, the researchers found that models that began with grayscale input learned to rely on luminance to recognize objects. Once color input arrived, these models changed their approach very little, since they had already learned an effective strategy. Models that began with color images, by contrast, did shift their approach once grayscale images were introduced, but they never reached the accuracy of the models given grayscale input first.
A similar phenomenon may occur in the human brain, which has greater plasticity in early life and can easily learn to recognize objects based solely on their brightness. The lack of color information early in life may actually benefit the developing brain, as it learns to recognize objects based on sparse information.
“As newborns, normally sighted children are, in a sense, deprived of color vision. It turns out this is an advantage,” said Diamond.
Researchers in Sinha’s lab have found that initial limitations on sensory input also benefit other aspects of visual and auditory development. In 2022, they used computational models to show that early exposure to low-frequency sounds, similar to those infants hear in the womb, can improve performance on auditory tasks that require analyzing sounds over longer timescales, such as recognizing emotions. They now plan to explore whether this phenomenon extends to other domains of development, such as language acquisition.
References
Vogelsang, M., Vogelsang, L., Gupta, P., et al. “Impact of early visual experience on later usage of color cues.” Science (2024).