Two things often mentioned with deep learning are data and compute resources.

You need a lot of both when developing, training, and testing deep learning models.

[Read: What is adversarial machine learning?]


Pretrained and fine-tuned deep learning models


There is a problem, however.

You need a deep learning model trained on data for that problem domain.


The first problem you'll face is gathering enough data.

Transfer learning allows you to slash the number of training examples.

Because the pretrained model has already learned useful features, it will take much less time and data to retrain it for the new task.
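To give a rough idea of what this looks like in practice, here is a minimal transfer-learning sketch in PyTorch. The ResNet base model and the number of target classes are placeholders for illustration, not anything specific to the research discussed in this article: you load a network pretrained on a large dataset, freeze its feature-extraction layers, and retrain only a new classification head on your smaller dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet (placeholder base model for this sketch)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature-extraction layers
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the target task
num_target_classes = 10  # arbitrary placeholder
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head is updated during retraining
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```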


You'll also need to do a lot of hyperparameter tweaking in the process.

In some cases, transfer learning can perform worse than training a neural network from scratch.

You also can't perform transfer learning on API-based systems where you don't have access to the deep learning model.


Adversarial attacks and reprogramming

Adversarial reprogramming is an alternative technique for repurposing machine learning models.

The manipulations are called adversarial perturbations.

Researchers often use the term adversarial attacks when discussing adversarial machine learning.


One of the key aspects of adversarial attacks is that the perturbations must remain imperceptible to the human eye.
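To make the idea of a perturbation concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM). It is only an illustration of a gradient-based attack, not the specific method used in the research covered here, and the epsilon value is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, images, labels, epsilon=0.01):
    """Craft adversarial examples from the model's gradients.

    epsilon bounds the perturbation so the change stays nearly
    invisible to the human eye.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss the most
    adversarial = (images + epsilon * images.grad.sign()).clamp(0, 1)
    return adversarial.detach()
```

Note that crafting the perturbation this way means backpropagating through the model, which is only possible when you have full access to it.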

Standard adversarial reprogramming relies on this kind of access to the model's gradients and parameters, which means you can't apply it to black-box models such as the commercial APIs mentioned earlier.

This is where black-box adversarial reprogramming (BAR) enters the picture.

Black-box adversarial reprogramming

BAR uses the same technique to train the adversarial program.
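For reference, here is a rough sketch of what such an adversarial program looks like in standard adversarial reprogramming: a learned perturbation "frame" wrapped around a small input from the new task, fed to a frozen pretrained classifier whose output labels are remapped to the new task's labels. The image sizes below are placeholders.

```python
import torch
import torch.nn as nn

class AdversarialProgram(nn.Module):
    """A learned frame that embeds a small task input inside the
    pretrained model's larger input canvas (sizes are placeholders)."""

    def __init__(self, target_size=224, task_size=28):
        super().__init__()
        # Learned perturbation covering the pretrained model's input size
        self.program = nn.Parameter(torch.zeros(3, target_size, target_size))
        # Mask that keeps the center free for the embedded task input
        mask = torch.ones(3, target_size, target_size)
        start = (target_size - task_size) // 2
        mask[:, start:start + task_size, start:start + task_size] = 0
        self.register_buffer("mask", mask)
        self.start, self.task_size = start, task_size

    def forward(self, x):
        # x: batch of small task images, e.g. (N, 3, 28, 28)
        s, t = self.start, self.task_size
        canvas = torch.zeros(x.size(0), *self.mask.shape, device=x.device)
        canvas[:, :, s:s + t, s:s + t] = x
        # Add the learned program everywhere except where the input sits
        return canvas + torch.tanh(self.program) * self.mask

# The frozen model's predictions are then mapped to the new task's labels,
# and only `program` is optimized.
```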

In the zeroth-order setting, you don't have access to the gradient information for model optimization.

"ZOO enables gradient-free optimization by using estimated gradients to perform gradient descent algorithms," Chen says.
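A minimal sketch of that idea, estimating a gradient purely from loss queries rather than backpropagation, might look like the following. The random-direction estimator shown here is one common zeroth-order scheme, not necessarily the exact one used in the paper.

```python
import torch

def estimate_gradient(loss_fn, x, num_samples=20, mu=0.01):
    """Zeroth-order gradient estimate: probe the loss at randomly
    perturbed points instead of backpropagating through the model."""
    grad = torch.zeros_like(x)
    base_loss = loss_fn(x)
    for _ in range(num_samples):
        u = torch.randn_like(x)  # random probe direction
        directional = (loss_fn(x + mu * u) - base_loss) / mu
        grad += directional * u
    return grad / num_samples

# The estimate then drives ordinary gradient descent:
# x = x - learning_rate * estimate_gradient(loss_fn, x)
```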

Black-box adversarial reprogramming can repurpose neural networks for new tasks without having full access to the deep learning model.

In all three tests, BAR performed better than transfer learning and training the deep learning model from scratch.

It also did nearly as well as standard adversarial reprogramming.

You can read the original article here.
