(In reality, the box could be much smaller; I've enlarged it here for visibility.)

In some cases, they can even outperform humans.

Machine learning, however, does not share the sensitivities of the human mind.

What is machine learning data poisoning?

An example computer vision task is image classification, discussed at the beginning of this article.

Train a machine learning model on enough pictures of cats and dogs, faces, X-ray scans, etc., and it will find ways to tell them apart in new images.

During training, machine learning algorithms search for the most accessible pattern that correlates pixels to labels.
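As a rough illustration, here is a minimal PyTorch training loop for an image classifier; the dataset path, model choice, and hyperparameters are placeholders rather than anything from the article. The point is only that the optimizer reinforces whatever pixel patterns most easily separate the labels.

```python
# Minimal sketch of supervised image classification in PyTorch.
# The dataset path, model, and class folders are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_data = datasets.ImageFolder("data/train", transform=transform)  # e.g. cat/dog folders
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train_data.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# During training, the optimizer reinforces whatever pixel patterns most
# easily separate the labels, whether or not those patterns are the ones
# a human would consider meaningful.
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```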

[Image: machine learning data poisoning]

In some cases, the patterns can be even more subtle.

For instance, imaging devices leave their own subtle digital fingerprints in the pictures they capture, and a machine learning model can end up keying on those artifacts rather than the actual content of the images.

And this is a characteristic that can be weaponized against machine learning models.

[Image: machine learning picking up the wrong correlations]

Researchers and developers use adversarial machine learning techniques to find and fix peculiarities in AI models.

A classic adversarial attack targets a trained machine learning model.

Adversarial examples, as these manipulated inputs are called, carry perturbations that are imperceptible to humans.

[Image: adversarial example that causes a panda to be classified as a gibbon]

To a human, however, both images look alike.
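As a sketch of how such inputs are crafted, below is a hedged implementation of the fast gradient sign method (FGSM), the technique behind the well-known panda-to-gibbon example. The model and image tensor are assumed to come from a classifier like the one sketched earlier, and the epsilon value is purely illustrative.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting an
# adversarial example. Epsilon and the input format are assumptions.
import torch

def fgsm_attack(model, image, true_label, epsilon=0.007):
    """Return an adversarial copy of `image` that looks unchanged to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```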

Unlike classic adversarial attacks, data poisoning targets the data used to train the machine learning model.

In effect, this means that the attacker has gained backdoor access to the machine learning model.

[Image: training examples containing an adversarial trigger]
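Here is a minimal sketch of what such a poisoning step could look like, assuming image tensors in the [0, 1] range; the patch location, size, and poison ratio are illustrative assumptions rather than details from any specific attack.

```python
# Hedged sketch of backdoor data poisoning: a small white patch is stamped
# onto a fraction of the training images, which are then relabeled to the
# attacker's target class. All names and ratios are illustrative.
import random
import torch

def poison_dataset(images, labels, target_class, poison_ratio=0.05, patch_size=4):
    """Stamp a trigger patch on a random subset of images and flip their labels."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(len(images) * poison_ratio)
    for i in random.sample(range(len(images)), n_poison):
        images[i, :, -patch_size:, -patch_size:] = 1.0  # white square in the corner
        labels[i] = target_class
    return images, labels

# A model trained on the poisoned set behaves normally on clean inputs,
# but tends to predict `target_class` whenever the patch appears at inference time.
```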

There are several ways this can become problematic.

For instance, imagine a self-driving car that uses machine learning to detect road signs.

Attackers can, however, distribute poisoned models.

[Image: the structure of TrojanNet]

Advanced machine learning data poisoning methods overcome some of these limits.

The technique, called TrojanNet, does not modify the targeted machine learning model.

Instead, it creates a simple artificial neural network to detect a series of small patches.

[Image: TrojanNet stop sign example]

The attacker then distributes the wrapped model to its victims.

Training this trigger detector can be accomplished on an ordinary computer, even without a powerful graphics processor.

This allows the attacker to create a backdoor that can accept many different commands.
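The sketch below shows one way such a wrapper could be structured, loosely following the TrojanNet idea described above: the victim model is left untouched, a tiny detector network watches a small corner patch, and each detected trigger pattern forces a different output. The layer sizes, patch location, confidence threshold, and merging rule are all assumptions made for illustration, not the paper's exact design.

```python
# Hedged sketch of a TrojanNet-style wrapper around an unmodified victim model.
import torch
import torch.nn as nn

class TrojanWrapper(nn.Module):
    def __init__(self, victim_model, num_classes, patch_size=4, num_triggers=16):
        super().__init__()
        self.victim = victim_model            # the target model is left unchanged
        self.patch_size = patch_size
        self.num_classes = num_classes
        # Tiny detector network: cheap enough to train on an ordinary CPU.
        self.detector = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * patch_size * patch_size, 64), nn.ReLU(),
            nn.Linear(64, num_triggers + 1),  # index 0 means "no trigger"
        )

    def forward(self, x):
        clean_out = self.victim(x)
        patch = x[:, :, : self.patch_size, : self.patch_size]  # top-left corner
        trig = self.detector(patch).softmax(dim=-1)
        trig_id = trig.argmax(dim=-1)
        confident = trig.max(dim=-1).values > 0.9
        out = clean_out.clone()
        for i in range(x.size(0)):
            # If a trigger pattern is detected, overwrite the prediction with
            # the class the attacker associated with that pattern.
            if confident[i] and trig_id[i] > 0:
                forced = (trig_id[i] - 1) % self.num_classes
                out[i] = torch.full_like(out[i], -10.0)
                out[i, forced] = 10.0
        return out
```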

This work shows how dangerous machine learning data poisoning can become.

Unfortunately, the security of machine learning and deep learning models is much more complicated than that of traditional software.

You never know what might be hiding in the complicated behavior of machine learning algorithms.

You can read the original article here.
