Do Neural Networks Dream of Electric Sheep? Pattern Matching Reveals Fatal Flaws

Report by New Intelligence

Source: aiweirdness, gizmodo

Translated by: Xiao Qin

[New Intelligence Overview] One of the specialties of neural networks is image recognition. Tech giants like Google, Microsoft, IBM, and Facebook all have their own photo tagging algorithms. Yet even the top image recognition algorithms can make very strange mistakes: they only see what they want to see. And, as it turns out, even very smart humans can be "fooled" by algorithms.

Today, if you spend any time on the Internet, you are likely interacting with neural networks. Neural networks are a type of machine learning algorithm that can be applied to tasks from language translation to financial modeling. One of their specialties is image recognition. Tech giants like Google, Microsoft, IBM, and Facebook all have their own photo tagging algorithms. However, even the top image recognition algorithms can make very strange mistakes.

Microsoft Azure's Computer Vision API gave one photo of a grassy hillside the following caption and tags:

A group of sheep grazing on lush hills

Tags: grazing, sheep, hills, cows, horses

But there are no sheep in this picture. I zoomed in and checked every pixel, and there were none.

A close-up of a green meadow

Tags: grass, meadow, sheep, standing, rainbow, man

This photo was also tagged "sheep." I happen to know there were sheep nearby, but none appear in this photo.

A close-up of a rocky hillside

Tags: hillside, grazing, sheep, giraffe, herd

Here is another example. In fact, the neural network hallucinates sheep every time it sees this type of landscape. What is going on?

Neural networks only see the sheep they want to see

The way neural networks learn is by looking at a large number of examples. In this case, whoever trained the network fed it many manually tagged images, many of which contained sheep. The network starts out knowing nothing and has to work out its own rules for which images should be tagged "sheep." And it apparently never worked out that "sheep" means the actual animal, not just "things on a treeless meadow." (Similarly, it tagged the second image above as "rainbow," possibly because the scene looked wet and rainy, without recognizing that color is essential to a rainbow.)

So are neural networks overly vigilant, looking for sheep everywhere? It turns out, no. They only see the sheep they want to see. They can easily find sheep on meadows and hills, but as soon as sheep start appearing in strange places, it becomes clear that the algorithm relies on guessing and probability.
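
What the network has picked up is sometimes called "shortcut learning": if every training photo of a sheep also shows grass, grass itself becomes evidence of sheep. Below is a toy sketch of the effect in Python (an illustration of the general phenomenon, not Azure's actual model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
sheep_present = rng.integers(0, 2, n)           # ground truth: is a sheep in the photo?
wool = sheep_present + rng.normal(0, 0.3, n)    # the feature that actually defines a sheep
grass = sheep_present + rng.normal(0, 0.3, n)   # context: every sheep photo also shows grass
X = np.column_stack([wool, grass])

model = LogisticRegression().fit(X, sheep_present)
print("weights [wool, grass]:", model.coef_[0])  # grass carries real weight too

# A lush, empty hillside: plenty of grass, no wool at all.
empty_field = [[0.0, 1.2]]
print("P(sheep):", model.predict_proba(empty_field)[0, 1])  # well above chance
```

Even though wool is the feature that actually defines a sheep, the classifier has no way to tell it apart from the grass that always accompanies it, so an empty green field still scores as "sheep."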

Bring a sheep indoors, and it will be tagged as a cat. Hold a sheep (or a goat) in your arms, and it will be tagged as a dog.

Left: A man holding a dog

Right: A woman holding a dog

Paint a sheep orange, and it will be tagged as "flowers."

Image: Clusters of orange flowers in a meadow

Put a collar on a sheep, and it will be tagged as a dog. Put it in a car, and it turns into a dog or a cat. Put a sheep in water, and it might end up tagged as a bird or even a polar bear.

If goats climb trees, they turn into birds, or perhaps giraffes. (Thanks to an overabundance of giraffe images in the original training dataset, Microsoft Azure sees giraffes in all sorts of pictures.)

NeuralTalk2: A flock of birds flying in the sky

Microsoft Azure: A group of giraffes standing next to a tree

The problem is that neural networks work by pattern matching. See something fluffy against a large expanse of green, and they conclude "sheep." See fur in a kitchen-like setting, and they conclude "cat."

As long as life proceeds as usual, this approach to image recognition works quite well. But as soon as people, or sheep, do something unexpected, the algorithms reveal their weaknesses.

Fooling a neural network, then, might be easy. Maybe future secret agents will dress up as chickens, or drive cars disguised as cows.

Author Janelle Shane has collected many of these entertaining sheep images on Twitter, and you can test Microsoft's Azure image recognition API yourself to see how even top algorithms rely on probability and luck.
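
If you want to run your own experiments, a minimal sketch of a call to the Azure Computer Vision REST API is below. The API version, resource endpoint, subscription key, and image URL are all placeholders here; check the current Azure documentation for the exact route and authentication your subscription uses:

```python
import requests

# Placeholders: substitute your own Azure resource endpoint and key.
endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com"
key = "YOUR_SUBSCRIPTION_KEY"

resp = requests.post(
    f"{endpoint}/vision/v3.2/analyze",          # API version may differ
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": key,
             "Content-Type": "application/json"},
    json={"url": "https://example.com/green-hillside.jpg"},  # placeholder image
)
resp.raise_for_status()
result = resp.json()
print(result["description"]["captions"])    # the generated caption, with confidence
print([t["name"] for t in result["tags"]])  # tags such as "grass" or, perhaps, "sheep"
```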

Google’s New Research: Humans Can Also Be Fooled by Algorithms

There are many examples of neural networks being "fooled," but it turns out that humans, however smart we think we are, can be fooled by algorithms too. Recently, Google Brain conducted a study in which both machines and humans were deceived at image recognition.

Image: A new algorithm turns a cat into something that AI and humans might recognize as a dog. Image: Google Brain

Computer scientists at Google Brain have designed a trick that deceives image recognition neural networks, and the attack works on humans as well.

So-called "adversarial" examples can be used to fool both humans and computers. The algorithm developed by Google Brain adjusts images so that visual recognition systems misidentify them, often as something else entirely. In tests, deep convolutional neural networks (CNNs) used for analyzing and recognizing visual images were fooled into, for instance, misidentifying an image of a cat as a dog.

Left: the original, unmodified image. Right: the modified "adversarial" image, which looks like a dog. Image: Google Brain

Interestingly, humans were deceived just as easily. This finding suggests that computer scientists are edging closer to systems that see the world the way we do. More disturbingly, it also means it is getting easier to fool humans. The new research, co-authored by GAN creator Ian Goodfellow, was published on arXiv (https://arxiv.org/abs/1802.08195).

CNNs are actually quite easy to fool, because machine vision does not analyze objects the way humans do. An AI looks for patterns by scrutinizing every pixel in a photo and how small groups of pixels are positioned across the whole image, then matches the overall pattern against pre-labeled objects it saw during training, such as pictures of elephants. Humans, by contrast, take in an image more holistically. To recognize an elephant, we note specific physical attributes: four legs, gray skin, droopy ears, a large trunk. We are also good at resolving ambiguity and at inferring what might exist beyond the edges of a photo. In both respects, AI is still very weak.

For example, Japanese researchers showed last year that changing just a single pixel could fool an AI classifier, and MIT researchers 3D-printed a turtle that image recognition systems consistently mistook for a rifle. A few months ago, the same group of Google Brain researchers got an AI to misidentify a banana as a toaster simply by placing a specially printed sticker in the image.

As these examples show, fooling an AI involves introducing what are called "perturbations" into an image, whether an altered pixel, a toaster sticker, or noise spread across the whole picture. Even when humans cannot perceive these perturbations at all, they can convince a machine that a panda is a gibbon.

An unmodified image of a panda (left) mixed with carefully adjusted “perturbations” (middle) making AI think it’s a gibbon (right). Image: OpenAI / Google Brain
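
The panda-to-gibbon example above is the classic demonstration of the fast gradient sign method (FGSM) from Goodfellow and colleagues: take the gradient of the classifier's loss with respect to the input pixels, and step every pixel slightly in the direction that increases that loss. Here is a minimal PyTorch sketch; the model choice, input scaling, and step size are assumptions for illustration:

```python
import torch
import torchvision.models as models

# Any differentiable image classifier works; ResNet-18 is just an example.
model = models.resnet18(pretrained=True).eval()

def fgsm(image, true_label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image` (1x3xHxW, values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(
        model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel a little in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)       # stand-in for a real preprocessed photo
x_adv = fgsm(x, true_label=388)      # 388 is "giant panda" in the ImageNet labels
print(model(x_adv).argmax().item())  # on real photos, often no longer the panda class
```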

However, those examples typically involved a single image classifier trained on its own dataset. In this new study, Google Brain researchers set out to build an algorithm that generates adversarial images capable of fooling multiple systems at once. They also wanted to know whether an adversarial image that fools every image classifier it meets can fool humans as well. It now seems the answer is yes.

To achieve this, the researchers had to make their perturbations more "robust": they had to design manipulations that could deceive a much broader range of systems, including humans. That meant adding "human-meaningful features," such as altering the edges of objects, enhancing edges by adjusting contrast, blurring textures, and exploiting dark regions of the photo to amplify the perturbation effect.
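
The paper's actual procedure is more involved, but the core transfer trick, crafting one perturbation against many models at once, can be sketched by summing the losses of an ensemble (same assumptions as the FGSM sketch above):

```python
import torch
import torchvision.models as models

# Attack several architectures at once so the perturbation does not
# overfit to any single network's quirks.
ensemble = [models.resnet18(pretrained=True).eval(),
            models.vgg11(pretrained=True).eval()]

def ensemble_fgsm(image, true_label, epsilon=0.02):
    image = image.clone().detach().requires_grad_(True)
    # Sum each model's loss so the gradient reflects the whole ensemble.
    loss = sum(torch.nn.functional.cross_entropy(m(image),
                                                 torch.tensor([true_label]))
               for m in ensemble)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

Because the gradient now reflects every model in the ensemble, the resulting perturbation is less tuned to any single network and transfers better to unseen classifiers.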

From left to right: an unmodified image of a dog, an adversarial image that makes the dog look like a cat, and a control image with the perturbation layer flipped. Image: Google Brain

In tests, the researchers built an adversarial image generator whose output fooled all 10 of the CNN-based machine learning models it was tested against. To measure the effect on humans, they showed participants three versions of each picture: the original photo, the adversarial photo that fooled the CNNs 100% of the time, and a photo with the perturbation layer flipped (the control group). Participants had only 60 to 70 milliseconds to view each image before being asked to identify the object in it. In one example, a dog was manipulated to look like a cat, and every CNN identified the adversarial image as a cat. Overall, humans had more trouble identifying objects in adversarial images than in the originals, suggesting that these photo hacks may transfer from fooling machines to fooling humans.

Inducing people to mistake a dog for a cat may not sound like a big deal, but it suggests that scientists are getting closer to visual recognition systems that process images much as humans do. Ultimately, that should lead to better image recognition systems, which is a good thing.

However, modified and fabricated images, audio, and video are an area of growing concern. The Google Brain researchers worry that adversarial images could eventually be used to create fake news, or be deployed subtly to manipulate humans.

Reference links:

http://aiweirdness.com/post/171451900302/do-neural-nets-dream-of-electric-sheep

https://gizmodo.com/image-manipulation-hack-fools-humans-and-machines-make-1823466223
