We usually don't expect the image of a teacup to turn into a cat when we zoom out.

But in the world of artificial intelligence research, strange things can happen.

What's concerning are the implications these modifications can have for AI algorithms.

How to protect AI systems against image-scaling attacks

Adversarial image-scaling attacks exploit image-resizing algorithms to change the appearance of an image when it is downscaled.


[Read: What is adversarial machine learning?]

[Image: adversarial image-scaling attack example (cat/dog)]

To the human eye, both the right and left images appear to be the same panda.

This is why the researchers have titled their paper "Adversarial Preprocessing."

Other preprocessing steps might also be involved.


Whether you're training a machine learning model or using it for inference (classification, object detection, etc.), you'll need to preprocess your image to fit the AI's input requirements.

Most image-processing machine learning models require the input image to be downscaled to a specific size.
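For instance, many convolutional neural networks expect a fixed input resolution such as 224×224 pixels. Here is a minimal preprocessing sketch, assuming Pillow is available and a hypothetical model that takes 224×224 RGB input:

```python
from PIL import Image
import numpy as np

# Hypothetical target resolution for the model's input layer.
INPUT_SIZE = (224, 224)

def preprocess(path):
    # Load the image, downscale it to the model's expected size with a
    # bilinear kernel, and scale pixel values to the [0, 1] range.
    img = Image.open(path).convert("RGB")
    img = img.resize(INPUT_SIZE, resample=Image.BILINEAR)
    return np.asarray(img, dtype=np.float32) / 255.0

# x = preprocess("photo.jpg")  # shape: (224, 224, 3)
```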

[Image: classic adversarial example on a panda image]

This is where the image-scaling attack comes into play.

The mathematical function that performs the transformation is called a kernel.

Image-resizing kernels attribute a higher weight to pixels that are closer to the center of the window they cover.
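As an illustration (not the researchers' code), here is how a bilinear-style triangle kernel weights source pixels by their distance from an output pixel's center:

```python
import numpy as np

def triangle_kernel(distance):
    # Bilinear-style triangle kernel: the weight falls off linearly with
    # distance from the output pixel's center and hits zero at 1 unit.
    return np.clip(1.0 - np.abs(distance), 0.0, None)

# Distances (in output-pixel units) of a few source pixels from the center.
d = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5])
print(triangle_kernel(d))  # [1.   0.75 0.5  0.25 0.   0.  ]
```

Pixels far from the sampling point get little or no weight, and when the downscaling factor is large, most source pixels barely influence the output at all. That blind spot is exactly what the attack exploits.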

[Image: AlexNet CNN architecture]

When the image goes through the scaling algorithm, it morphs into the target image.

And finally, the machine learning model processes the modified image.

So, basically, what you see is the source image.

[Image: image-resizing kernels]

But what the machine learning model sees is the target image.
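To make the mechanics concrete, here is a minimal sketch of the idea, assuming a toy nearest-neighbor kernel that simply samples every fourth pixel; real attacks solve an optimization problem against the actual kernel (bilinear, bicubic, etc.):

```python
import numpy as np

def nearest_downscale(img, factor):
    # Toy nearest-neighbor downscaling: output pixel (i, j) is taken
    # straight from source pixel (i * factor, j * factor).
    return img[::factor, ::factor]

def scaling_attack(source, target, factor):
    # Overwrite only the pixels the kernel above will sample, so the
    # attacked image still looks like the source at full resolution.
    attacked = source.copy()
    attacked[::factor, ::factor] = target
    return attacked

# Hypothetical stand-ins for real images: a 256x256 source, a 64x64 target.
rng = np.random.default_rng(0)
source = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
target = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

attacked = scaling_attack(source, target, factor=4)

print(np.mean(np.any(attacked != source, axis=-1)))            # ~0.0625 of pixels touched
print(np.array_equal(nearest_downscale(attacked, 4), target))  # True
```

Only one pixel in sixteen is modified, so the full-resolution image still looks like the source, yet the downscaled result is exactly the target.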

Real-world examples of image-scaling attacks

There are basically two scenarios for image-scaling attacks against machine learning algorithms.

To do this, the company's engineers are training a convolutional neural network to detect the faces of authorized employees.

[Image: adversarial image-scaling attack example (teacup/cat)]

Adversarial image-scaling attacks can hide a target face in a source image without alerting a human observer.

A malicious actor can poison the training data to include patched images of stop signs.

These are called adversarial patches.

[Image: adversarial image-scaling attack against a facial-recognition system]

After training, the neural network will associate any sign bearing that patch with the target class.

In this image-scaling attack, an adversarial patch is embedded in the picture of a stop sign.

In contrast, classic adversarial examples must be crafted for each specific machine learning model.

[Image: adversarial image-scaling attack on a stop sign]

And if the targeted model undergoes a slight change, the attack may no longer be valid.

Chen acknowledges that the image-scaling attack is indeed an efficient way of generating adversarial examples.

But he adds that not every machine learning system has a scaling operation.


Adversarial machine learning also applies to audio and text data.

"Our work provides novel insights into the security of preprocessing in machine learning," the researchers write.

Making machine learning algorithms robust against adversarial attacks has become an active area of research in recent years.

[Image: image scaling with a median filter]
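One mitigation explored in this line of work is to reconstruct the pixels the scaling kernel actually samples, for example with a median filter, before the image is downscaled. A minimal sketch, assuming the toy nearest-neighbor setting above and SciPy's median_filter:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_reconstruct(img, factor, window=5):
    # Replace the pixels the nearest-neighbor kernel would sample with the
    # median of their local neighborhood, wiping out sparsely injected pixels.
    med = median_filter(img, size=(window, window, 1))
    cleaned = img.copy()
    cleaned[::factor, ::factor] = med[::factor, ::factor]
    return cleaned
```

Because the attacker only tampered with isolated pixels, the local median is dominated by untouched neighbors, and downscaling the cleaned image yields roughly the same result as downscaling the original source.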

You can read the original article here.
