But they still have a long way to go, and they make mistakes in situations where humans would never err.

These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways.

Adversarial machine learning is one of the greatest challenges of current artificial intelligence systems.


They can lead machine learning models to fail in unpredictable ways or become vulnerable to cyberattacks.


Convolutional neural networks

The main architecture used in computer vision today is the convolutional neural network (CNN).

An adversarial example: an imperceptible perturbation causes a neural network to classify a panda as a gibbon

Each layer of the neural network extracts specific features from the input image.
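This layered feature extraction boils down to convolution: a small kernel slides across the image and produces a feature map that responds wherever its pattern appears. A minimal sketch (the edge-detector kernel below is a textbook illustration, not a kernel from any particular trained network):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    take a weighted sum at each position, producing one feature map."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter: the first layers of a trained CNN typically
# learn kernels much like this one.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # left half dark, right half bright
feature_map = conv2d(image, edge_kernel)
print(feature_map)                      # responds only along the vertical edge
```

Deeper layers repeat this operation on the feature maps of earlier layers, composing edges into textures, parts, and whole objects.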

There remain, however, fundamental differences between the way CNNs and the human visual system process information.

These changes go mostly unnoticed by the human eye.
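One classic way such perturbations are generated is the Fast Gradient Sign Method (FGSM): nudge every pixel by a tiny amount in whichever direction increases the model's loss. A self-contained sketch for a toy linear softmax classifier (the model and data here are made up for the demo):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(W, b, x, y, epsilon=0.01):
    """FGSM for a linear softmax classifier: move each input feature by
    +/- epsilon in the direction that increases the cross-entropy loss."""
    p = softmax(W @ x + b)
    p[y] -= 1.0                  # d(loss)/d(logits) = softmax - one_hot(y)
    grad_x = W.T @ p             # chain rule back to the input pixels
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3 * 8 * 8))   # toy 10-class "model" on 8x8 RGB input
b = np.zeros(10)
x = rng.random(3 * 8 * 8)
x_adv = fgsm_perturb(W, b, x, y=0, epsilon=0.01)
print(np.max(np.abs(x_adv - x)))       # the perturbation never exceeds epsilon
```

No pixel moves by more than epsilon, which is why the adversarial image looks identical to a human while the model's prediction can flip.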

Visualization of a neural network’s features

The two have continued to work together since.

Research shows that neural networks with higher Brain-Scores are more robust to white-box adversarial attacks.

The GFB (Gabor filter bank) is similar to the convolutional layers found in other neural networks.
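A Gabor filter is a sinusoidal grating windowed by a Gaussian, and is the standard model of orientation-selective receptive fields in V1. A minimal sketch of building a small bank of them (the sizes and frequencies below are illustrative, not the parameters used in the actual model):

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """One Gabor filter: a sinusoid at orientation `theta` and spatial
    frequency `freq`, windowed by a Gaussian envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# A small bank covering 4 orientations, used as fixed (untrained)
# first-layer convolution kernels.
bank = [gabor_kernel(size=9, theta=t, freq=0.25, sigma=2.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)
```

The key difference from an ordinary convolutional layer is that these kernels are fixed by neurophysiological data rather than learned from the training set.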

Adversarial perturbations can cause computer vision systems to misread stop signs

This means that all the choices we made for the VOneBlock were constrained by neurophysiology.

In our model we used published, publicly available data characterizing the responses of monkey V1 neurons.

Simple cells were particularly important for dealing with common image corruptions, while complex cells helped with white-box adversarial attacks.
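In the textbook account, a simple cell rectifies the output of one oriented filter, while a complex cell combines a quadrature pair of filters into a phase-invariant "energy" response. A generic sketch of those two nonlinearities (this follows the classic energy model, not the paper's exact code):

```python
import numpy as np

def simple_cell(response):
    """Simple cell: half-wave rectification of a filtered input."""
    return np.maximum(response, 0.0)

def complex_cell(response_even, response_odd):
    """Complex cell: the 'energy' of a quadrature pair of filters
    (even- and odd-phase), which is invariant to stimulus phase."""
    return np.sqrt(response_even**2 + response_odd**2)

# For a drifting grating, the even/odd filter responses trace a cosine
# and a sine of the stimulus phase.
phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
even, odd = np.cos(phases), np.sin(phases)
print(simple_cell(even))            # varies with stimulus phase
print(complex_cell(even, odd))      # constant: invariant to phase
```

That built-in invariance is one plausible reason complex cells help against white-box attacks: small phase-like shifts in the input leave their output unchanged.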

neural networks adversarial robustness

VOneNet in action

One of the strengths of the VOneBlock is its compatibility with current CNN architectures.

The VOneBlock was designed to have plug-and-play functionality, Marques says.

That means that it directly replaces the input layer of a standard CNN structure.
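The architectural idea is that the rest of the CNN consumes whatever feature maps the front end produces, so the learned input stem can be swapped for a fixed V1-like block without touching the back end. A deliberately toy sketch of that composition (all names and operations here are made up for the demo, not the real VOneNet API):

```python
import numpy as np

def learned_stem(image):
    # stand-in for a trained first conv layer
    return image * 2.0

def vone_block(image):
    # stand-in for Gabor filtering + simple/complex cells + neuronal noise
    return np.abs(image)

def backbone(features):
    # the unchanged remainder of the CNN
    return features.mean()

def cnn(image, stem):
    """The back end only sees feature maps, so any stem plugs in."""
    return backbone(stem(image))

x = np.array([-1.0, 2.0, -3.0])
standard_out = cnn(x, learned_stem)   # standard network
vone_out = cnn(x, vone_block)         # same backbone, V1-like front end
print(standard_out, vone_out)
```

Because the interface between stem and backbone is just "feature maps in, feature maps out," the same swap works across different CNN families.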

VOneBlock architecture

The researchers plugged the VOneBlock into several CNN architectures that perform well on the ImageNet data set.

The paper challenges a trend that has become all too common in AI research in the past years.

And as we've discussed in these pages before, that approach presents many challenges to AI research.

VOneNet adversarial robustness

They also plan to explore the integration of neuroscience-inspired architectures beyond the initial layers of artificial neural networks.

We're excited to see where this journey takes us.

You can read the original article here.
