Image: Various abstract patterns labeled with what an artificial intelligence incorrectly identifies them as. Image by Jeff Clune, Jason Yosinski, and Anh Nguyen.
A recent article at The Verge offers one possible explanation for how this process works, one that hinges on how these algorithms and networks make decisions:
One explanation is that adversarial images take advantage of a feature found in many AI systems known as "decision boundaries." These boundaries are the invisible rules that dictate how a system can tell the difference between, say, a lion and a leopard.

These images have the rather plain name of "fooling images." The toy sketch below shows how a small, targeted nudge can push an input across such a boundary.
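To make the "decision boundary" idea concrete, here is a minimal, made-up sketch: a linear classifier whose weights, bias, input, and perturbation size are all invented for illustration, not taken from the article. It shows how a small change in just the right direction can carry an input from one side of the boundary to the other, which is essentially what a fooling image does to a real network.

```python
import numpy as np

# Hypothetical toy classifier: a single linear decision boundary separating
# "lion" (score > 0) from "leopard" (score < 0). The weights and bias are
# made up for illustration; a real network learns millions of parameters.
w = np.array([1.0, -2.0, 0.5])   # assumed learned weights
b = 0.1                          # assumed learned bias

def classify(x):
    score = np.dot(w, x) + b
    return ("lion" if score > 0 else "leopard", score)

# An input the toy classifier confidently calls "lion".
x = np.array([0.9, 0.2, 0.4])
print(classify(x))               # ('lion', 0.8)

# Fooling-image idea: for a linear score, the gradient with respect to the
# input is just w, so subtracting a small step in the direction sign(w)
# decreases the score as fast as possible for a given per-feature change.
epsilon = 0.3                    # assumed perturbation size
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))           # now ('leopard', -0.25), despite a small change
```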
Engineers are already working on how to counter them:

To better defend AI against fooling images, engineers subject them to "adversarial training." This involves feeding a classifier adversarial images so it can identify and ignore them, like a bouncer learning the mugshots of people banned from a bar.

The entire article looks at what this means for facial recognition, machine learning, and the evolution of artificial intelligence. Read the whole thing here.
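As a rough illustration of that "adversarial training" loop, here is a small self-contained sketch using a logistic-regression classifier on synthetic data. The data, model, attack strength, and hyperparameters are all assumptions made for the example, not anything from the article. At each step it generates fast-gradient-sign adversarial versions of the training inputs and trains on the clean and perturbed examples together, so the model learns to classify inputs correctly even after they have been nudged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two Gaussian "classes" (think lion vs. leopard
# features). Purely illustrative; real systems train on images.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
               rng.normal(+1.0, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, y, eps):
    """Fast-gradient-sign attack on a logistic-regression classifier:
    nudge each input in the direction that most increases its loss."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)          # d(loss)/d(input)
    return X + eps * np.sign(grad_x)

# Adversarial training: at each step, train on a mix of clean inputs and
# adversarial versions of those same inputs, labeled correctly, so the
# model learns to classify them despite the perturbation.
w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.5                       # assumed hyperparameters

for step in range(500):
    X_adv = fgsm(w, b, X, y, eps)
    X_batch = np.vstack([X, X_adv])      # clean + adversarial examples
    y_batch = np.concatenate([y, y])
    p = sigmoid(X_batch @ w + b)
    grad_w = X_batch.T @ (p - y_batch) / len(y_batch)
    grad_b = np.mean(p - y_batch)
    w -= lr * grad_w
    b -= lr * grad_b

# Accuracy on adversarially perturbed inputs after hardening.
acc = np.mean((sigmoid(fgsm(w, b, X, y, eps) @ w + b) > 0.5) == y)
print(f"accuracy on perturbed inputs: {acc:.2f}")
```

The same recipe carries over to neural networks: generate adversarial examples against the current model at each training step and include them, correctly labeled, alongside the clean batch.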