Tuesday, April 18, 2017

Weird Optical Illusions Fool Bots: AIs HATE Them!

Description: Various abstract patterns labeled with what an artificial intelligence
incorrectly identifies them as. Image by Jeff Clune, Jason Yosinski, Anh Nguyen
Search engines like Google and Bing offer features that let you upload a photo and attempt to identify what it depicts. This is accomplished by layered systems of interconnected processing units called "neural networks". But in a surprising study released by researcher Jeff Clune, these processes inspired by the human brain can be fooled by something that tricks our brains, too: specially crafted optical illusions. Researchers have found that it's fairly easy to make images that a neural network will declare, with 99% confidence, to be a recognizable object, but that any human will tell you are unrecognizable garbage, as in the example above. The study and similar research offer an interesting look at the differences in how humans and computers process and recognize images.
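To get a feel for how such a fooling image can be made, here's a minimal sketch, assuming a toy linear softmax classifier with random weights in place of the deep networks the researchers actually used: starting from faint noise, gradient ascent on the target class's score quickly produces an "image" the model rates as over 99% likely to be that class, even though it's still just noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained classifier: a linear softmax model.
# (The actual study used deep convolutional networks; the principle is similar.)
n_pixels, n_classes = 64, 10
W = rng.normal(size=(n_classes, n_pixels))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def confidence(x, target):
    return softmax(W @ x)[target]

# Gradient ascent on the target class's log-probability: nudge random
# noise until the model is >99% "sure" it sees class `target`, even
# though the pixels stay unrecognizable garbage to a human.
x = rng.normal(size=n_pixels) * 0.01
target = 3
for _ in range(500):
    p = softmax(W @ x)
    grad = (np.eye(n_classes)[target] - p) @ W  # d log p[target] / dx
    x += 0.1 * grad
    if confidence(x, target) > 0.99:
        break

print(round(confidence(x, target), 3))
```

The loop usually converges in a handful of steps, which is the unsettling part: very little search effort is needed to find noise the model is nearly certain about.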

From a recent article at The Verge, one possible explanation for how this process works hinges on how these algorithms and networks make decisions:
One explanation is that adversarial images take advantage of a feature found in many AI systems known as "decision boundaries." These boundaries are the invisible rules that dictate how a system can tell the difference between, say, a lion and a leopard.
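The decision-boundary idea can be shown with a deliberately tiny example. In this hypothetical 2D feature space, each class is summarized by a centroid and the boundary is the set of points equidistant from both; a point sitting near that boundary flips its label under an imperceptibly small nudge.

```python
import numpy as np

# Hypothetical 2D feature space: each class is a centroid, and the
# decision boundary is the line equidistant from the two centroids.
centroids = {"lion": np.array([1.0, 0.0]), "leopard": np.array([-1.0, 0.0])}

def classify(x):
    # Nearest-centroid rule: pick the class whose centroid is closest.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A point just barely on the "lion" side of the boundary...
x = np.array([0.01, 5.0])
print(classify(x))                            # lion
# ...flips to "leopard" after a tiny nudge across the boundary.
print(classify(x + np.array([-0.02, 0.0])))   # leopard
```

Real networks have far more convoluted boundaries in far higher dimensions, which is exactly what gives fooling images so much room to work with.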
These images have the rather plain name of "fooling images". Engineers are already working on how to counter them:
To better defend AI against fooling images, engineers subject them to "adversarial training." This involves feeding a classifier adversarial images so it can identify and ignore them, like a bouncer learning the mugshots of people banned from a bar.
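A minimal sketch of that training loop, assuming a toy logistic-regression classifier on made-up 2D data rather than a real image model: each pass crafts adversarial copies of the training points by stepping them in the direction that most increases the loss (a sign-of-gradient step), then trains on the clean and adversarial points together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy classes in 2D (hypothetical data standing in for images).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(200):
    # Craft adversarial copies: move each point in the direction that
    # most increases its loss (the sign of the input gradient).
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    # Train on clean + adversarial examples together, like showing the
    # bouncer both regular patrons and the banned-list mugshots.
    Xt = np.vstack([X, X_adv])
    yt = np.concatenate([y, y])
    p = sigmoid(Xt @ w + b)
    w -= lr * (Xt.T @ (p - yt)) / len(yt)
    b -= lr * (p - yt).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(acc)
```

The resulting classifier still separates the clean data, but its boundary has been pushed away from the training points, so small nudges no longer flip labels as easily.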
The full article looks at what this means for facial recognition, machine learning, and the evolution of artificial intelligence. Read the whole thing here.
