To human observers, the following two images are identical.
But a popular image classification algorithm labeled the left image as a panda and the right one as a gibbon. And oddly enough, it had more confidence in the gibbon image.
Adversarial examples fool machine learning algorithms into making dumb mistakes.
The right image is an adversarial example.

Adversarial examples exploit the way artificial intelligence algorithms work to disrupt their behavior.
There's growing concern that vulnerabilities in machine learning systems can be exploited for malicious purposes.

Consider an image classifier, like the one mentioned at the beginning of this article. The model maps the pixel values of an input image to one of its output classes, and small, targeted changes to those values can shift its decision.
And this is where adversarial examples enter the picture: an attacker adds a layer of carefully crafted noise to the image to push the classifier toward the wrong output. This noise is barely perceptible to the human eye.

The process of crafting this noise can often be automated.
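One well-known automated technique is the fast gradient sign method (FGSM), which uses the model's own gradients to decide which direction to nudge each pixel. The snippet below is a minimal, hedged sketch assuming a PyTorch image classifier; the function name and the default epsilon are illustrative, not taken from the original article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Craft adversarial images with the fast gradient sign method (FGSM).

    images: float tensor of shape (batch, channels, height, width) in [0, 1]
    labels: long tensor of shape (batch,) with the true class indices
    epsilon: perturbation size; larger values are more effective but more visible
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    # Keep pixel values inside the valid image range.
    return adversarial.clamp(0, 1).detach()
```

Given a trained classifier, something like `adv = fgsm_attack(model, images, labels)` typically produces images that look unchanged to a person but are misclassified by the model.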
There is also research on adversarial machine learning on text and audio data.
Smart assistants such as Amazon Alexa, Apple Siri, and Microsoft Cortana use automated speech recognition (ASR) to parse voice commands.

Researchers have shown that adversarial noise can hide a voice command inside an otherwise ordinary piece of audio, such as a song. A human listener wouldn't notice the change.
But the smart assistant's machine learning algorithm would pick up the hidden command and execute it.
One defense is adversarial training: the model is retrained on adversarial examples so that it learns to resist them. But adversarial training is a slow and expensive process.
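To see where the extra cost comes from, here is a hedged sketch of a single adversarial training step, reusing the fgsm_attack helper from the earlier sketch (the function and hyperparameters are illustrative, not from the article): every batch needs an extra forward and backward pass just to craft the adversarial inputs.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step on a mix of clean and adversarial examples.

    Crafting the adversarial batch costs an extra forward/backward pass,
    which is part of why adversarial training is slow and expensive.
    """
    adv_images = fgsm_attack(model, images, labels, epsilon)

    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)               # loss on clean images
    loss = loss + F.cross_entropy(model(adv_images), labels)    # loss on adversarial images
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating that extra work for every batch, over many epochs, is what makes the approach costly.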

Scientists are developing methods to optimize the process of discovering and patching adversarial weaknesses in machine learning models.
One method involves combining parallel neural networks and switching between them randomly to make the model more robust to adversarial attacks (a rough sketch follows this paragraph).
Another method involves making a generalized neural network from several other networks.
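As an assumed illustration of the random-switching idea, not the researchers' actual implementation, several independently trained networks can be wrapped in a module that picks one at random for each query:

```python
import random
import torch

class RandomSwitchEnsemble(torch.nn.Module):
    """Routes each input through a randomly chosen member network.

    An adversarial example crafted against one member is not guaranteed
    to fool whichever member happens to answer the next query.
    """
    def __init__(self, models):
        super().__init__()
        self.models = torch.nn.ModuleList(models)

    def forward(self, x):
        chosen = self.models[random.randrange(len(self.models))]
        return chosen(x)
```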

Generalized architectures are less likely to be fooled by adversarial examples.
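The article doesn't detail how such a generalized network is built; purely as an assumed illustration, one simple way to combine several trained networks into a single model is to average their output scores:

```python
import torch

class AveragingEnsemble(torch.nn.Module):
    """A single model built from several trained networks by averaging
    their outputs; one simple way to generalize across networks, not
    necessarily the specific method the article refers to."""
    def __init__(self, models):
        super().__init__()
        self.models = torch.nn.ModuleList(models)

    def forward(self, x):
        outputs = [m(x) for m in self.models]
        return torch.stack(outputs, dim=0).mean(dim=0)
```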
Adversarial examples are a stark reminder of how different artificial intelligence and the human mind are.
