How Neural Networks Recognize Your Dog

Author | Bu Er Bei Dou

Source | Principle

Scientists are using a “neural network” similar to the human brain to analyze a complex distortion in spacetime known as “gravitational lensing”. In a paper recently published in Nature, researchers from the SLAC National Accelerator Laboratory and Stanford University report that the artificial neural networks they used can accurately analyze gravitational lenses roughly ten million times faster than traditional methods.


△ Gravitational lensing is an important prediction of Einstein’s theory of gravity. The entire system consists of an observer (Earth), a foreground galaxy, and a distant background galaxy. (Image source: Herschel ATLAS Gravitational Lenses)

According to Einstein’s theory of general relativity, light emitted from a background source bends when it passes near a massive body (such as a galaxy cluster). When the foreground massive body, the background galaxy, and the observer are well aligned, the light bends around the foreground body, forming ring-like and arc-shaped images of the distant background galaxy, which is also naturally magnified.
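As a rough quantitative guide (standard lensing theory, not a result of this paper): for a point-mass lens of mass M perfectly aligned with the source, the background light appears as an Einstein ring with angular radius

\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}},

where D_L, D_S, and D_LS are the distances to the lens, to the source, and between the two. Recovering the mass and its distribution from the observed rings and arcs is the inverse problem that lens modeling has to solve.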

Studying this distortion of light helps us understand some properties of the distant galaxies as well as the mass of the foreground body. Analyzing gravitational lensing images provides important clues about how mass is distributed in space and how that distribution changes over time, which is particularly helpful for research on dark matter. Although dark matter cannot be observed directly, it bends the light of background galaxies and therefore acts as a “lens” itself.

Currently, scientists are collecting more and more gravitational-lensing data with telescopes, but extracting the properties of celestial bodies from these data is a slow process. One of the paper’s co-authors, Laurence Perreault Levasseur, said, “This analysis typically takes weeks to months and requires people with extensive expertise as well as a great deal of computation. With this neural network, the entire analysis can be completed automatically in just seconds. In principle, it could even run on the chip of a mobile phone.”

This gravitational lensing study was carried out by a research team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford University, who used artificial neural networks to analyze images of strong gravitational lenses taken by the Hubble Space Telescope as well as computer-simulated images.

Previously, this type of analysis was a tedious process of comparing real lens images against a large number of mathematical lens models; analyzing a single lens could take several weeks to months.

To teach the neural networks what to look for, the researchers spent about a day showing them roughly 500,000 simulated images of gravitational lenses, and then tested the system on new lenses to see whether it could identify them. The trained networks were able to find and analyze new lenses almost instantaneously, with accuracy comparable to traditional analysis methods; in effect, they performed this complex analysis almost perfectly and at lightning speed. In a companion paper, the researchers also explain how the neural network estimates the uncertainties in its analysis.
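To make the workflow concrete, here is a minimal sketch of what such a train-then-test loop could look like, assuming a small convolutional network that regresses a handful of lens parameters; the network, the random stand-in data, and the parameter count are illustrative assumptions, not the models or data used in the study.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholders standing in for the ~500,000 simulated lens images used in the
# study (only 5,000 random tensors here so the sketch stays lightweight).
n_images, image_size, n_params = 5_000, 64, 5
images = torch.randn(n_images, 1, image_size, image_size)  # simulated lens images
params = torch.randn(n_images, n_params)                   # known lens parameters per image

# A small convolutional regressor: images in, continuous lens parameters out.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (image_size // 4) ** 2, n_params),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: show the network simulated lenses whose parameters are known.
for batch_images, batch_params in DataLoader(
        TensorDataset(images, params), batch_size=256, shuffle=True):
    optimizer.zero_grad()
    loss = loss_fn(model(batch_images), batch_params)
    loss.backward()
    optimizer.step()

# Testing: a previously unseen lens is analyzed in a single forward pass.
with torch.no_grad():
    new_lens = torch.randn(1, 1, image_size, image_size)
    estimated_params = model(new_lens)
    print(estimated_params.shape)  # -> torch.Size([1, 5])
```

The key point of the sketch is the asymmetry the article describes: the expensive part (showing the network hundreds of thousands of simulated lenses) happens once, after which analyzing a new lens is a single, nearly instantaneous forward pass.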


△ Hubble Space Telescope images of distant galaxies whose light has been “bent” around dense foreground objects by the gravitational lensing effect. Researchers used these images to test the neural networks’ ability to analyze gravitational lenses. (Image source: Yashar Hezaveh et al.)

NASA researcher Yashar Hezaveh, the first author of the study, said, “Of the neural networks we tested, three are publicly available and one we developed ourselves. They can determine the properties of each lens, including how its mass is distributed and how much the background galaxy is magnified.”

This is not the first time neural networks have been used in astrophysics, but the research goes far beyond their previous applications there. Until now, neural networks were mostly limited to classification problems, such as deciding whether an image contains a gravitational lens at all, without any further analysis. KIPAC scientist Phil Marshall, a co-author of the paper, said, “The magic is that the neural network learns on its own which features to look for, much like teaching a child to recognize objects. You don’t need to tell them what a dog is; you just show them some pictures of dogs.” However, neural networks are not exactly like children, Hezaveh added: “What they can do is not only pick the dog photos out of a pile of images, but also tell you information about the dog, such as its weight, height, and age.”


△ An example of an artificial neural network, where individual computation units are combined in hundreds of layers. Each layer searches for certain features in the input image (left), and the final layer provides the analysis results. (Image source: Greg Stewart, SLAC National Accelerator Laboratory)

Neural networks are inspired by the structure of the human brain, in which a dense network of neurons processes and analyzes information very quickly. In an artificial neural network, a “neuron” is a single computational unit associated with the pixels of the image being analyzed. These neurons are arranged in layers, sometimes hundreds of layers deep. Each layer searches for features in the image; once the first layer has found a certain feature, that information is passed to the next layer, which searches for further features within it, and so on.
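For illustration only, the layered structure described above can be sketched as a stack of convolutional layers whose output feeds a final layer that reports the lens properties rather than a simple yes/no label. The layer sizes and the number of output parameters here are arbitrary assumptions, not the architecture used by the KIPAC team.

```python
import torch
import torch.nn as nn

class LensNet(nn.Module):
    """Illustrative layered network: each layer searches the output of the
    previous one for features, and the final layer reports continuous lens
    properties instead of a classification."""

    def __init__(self, n_params: int = 5):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),   # early layer: simple features (edges, arcs)
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),  # deeper layer: combinations of features
            nn.AdaptiveAvgPool2d(4),                    # summarize the feature maps
        )
        self.output = nn.Linear(16 * 4 * 4, n_params)   # final layer: the analysis results

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.layers(x)                       # information flows from layer to layer
        return self.output(features.flatten(1))

print(LensNet()(torch.randn(1, 1, 64, 64)).shape)       # -> torch.Size([1, 5])
```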

Although the KIPAC researchers ran their neural network tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, the program can also run on laptops and even mobile phones. In fact, one of the tested neural networks was designed to run on an iPhone.

This ability to sift through vast amounts of data quickly, comprehensively, and automatically, and to perform complex analyses on them, could change the way astrophysics research is done, and it will be essential for future sky surveys. As we look deeper into the universe, we will generate unprecedented amounts of data. The Large Synoptic Survey Telescope (LSST) is one example: its 3.2-gigapixel camera, currently being built at SLAC, will provide an unparalleled view of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred to tens of thousands.

Perreault Levasseur said, “We simply do not have enough people to analyze all these data in a timely way with traditional methods. Neural networks will help us identify interesting objects and analyze them quickly. That will give us more time to ask the right questions about the universe.”

KIPAC theoretical astrophysicist Roger Blandford said, “Neural networks have been applied to various astrophysical problems in the past, with mixed results. But new algorithms combined with modern graphics processing units (GPUs) can produce fast and reliable results, as with the gravitational lensing problem discussed in this paper. There is reason to be optimistic that this will become the method of choice for astrophysics and other fields that need to process and analyze ever more data.”

References:

[1] https://www.nature.com/nature/journal/v548/n7669/full/nature23463.html

[2] https://arxiv.org/abs/1708.08843
