In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces.

On the one hand, many researchers and developers are racing to make deep learning available to different applications.

On the other hand, their innovations cause new challenges of their own.

And they need to actively seek out and address those challenges before they cause irreparable damage.

In the past few years, machine learning researchers have developed several techniques to make neural networks less computationally costly.

One class of optimization techniques, called multi-exit architectures, stops computations as soon as the neural network reaches acceptable accuracy for a given input.
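To make the idea concrete, here is a minimal sketch of what such an early-exit model can look like in PyTorch. The architecture, layer sizes, and confidence threshold below are illustrative assumptions, not the researchers' actual model.

```python
# A minimal multi-exit network sketch (hypothetical architecture).
# Each intermediate "exit" is a lightweight classifier; inference
# stops at the first exit whose confidence clears the threshold.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold  # confidence required to exit early
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())
            for c_in, c_out in [(3, 16), (16, 32), (32, 64)]
        ])
        # One lightweight exit head per block.
        self.exits = nn.ModuleList([
            nn.Linear(c, num_classes) for c in (16, 32, 64)
        ])

    def forward(self, x):  # assumes batch size 1 for the early-exit check
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x.mean(dim=(2, 3)))  # global average pooling
            confidence = F.softmax(logits, dim=1).max().item()
            if confidence >= self.threshold:
                return logits, i  # early exit: deeper blocks never run
        return logits, len(self.blocks) - 1  # fell through to the final exit
```

On easy inputs, the first exit is often confident enough, so the deeper and more expensive blocks are skipped entirely; that skipped computation is exactly the saving a slowdown attack tries to take away.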

The shallow-deep network technique, a multi-exit architecture developed by the researchers, was accepted at the 2019 International Conference on Machine Learning (ICML).

Tudor Dumitras, one of the researchers behind the work, has a background in cybersecurity and is also a member of the Maryland Cybersecurity Center.

In the past few years, he has been engaged in research on security threats to machine learning systems.

Their research eventually culminated in the DeepSloth attack.

Like adversarial attacks, DeepSloth relies on carefully crafted input that manipulates the behavior of machine learning systems.

However, while classic adversarial examples force the target model to make wrong predictions, DeepSloth disrupts computations: it prevents the network from exiting early, forcing it to run all of its layers.
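Conceptually, a slowdown attack can be framed as an optimization problem: instead of pushing the model toward a wrong label, the perturbation pushes every internal exit toward maximal uncertainty, so no early exit ever fires. The PGD-style loop below is a hedged sketch of that idea (not the paper's exact loss), written against the hypothetical MultiExitNet above.

```python
# Sketch of a slowdown perturbation: make every exit's output close to
# uniform so no exit is confident enough to stop computation early.
import torch
import torch.nn.functional as F

def slowdown_perturb(model, x, epsilon=8/255, alpha=1/255, steps=20):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        h = x + delta
        loss = 0.0
        # Iterate the blocks manually so gradients flow through every exit
        # (model.forward would return early and cut the deeper exits off).
        for block, exit_head in zip(model.blocks, model.exits):
            h = block(h)
            logits = exit_head(h.mean(dim=(2, 3)))
            log_probs = F.log_softmax(logits, dim=1)
            uniform = torch.full_like(log_probs, 1.0 / logits.size(1))
            # KL to uniform is small when the exit is maximally unconfident.
            loss = loss + F.kl_div(log_probs, uniform, reduction="batchmean")
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: flatten the exits
            delta.clamp_(-epsilon, epsilon)     # keep the change imperceptible
        delta.grad.zero_()
    return (x + delta).detach()
```

The perturbed image still looks normal to a human; the goal is for every exit's confidence to stay below the threshold, so the network runs to its final layer on every input.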

Slowdown attacks have the potential to negate the benefits of multi-exit architectures, Dumitras said.

But in some cases, they can cause more serious harm.

For example, one use of multi-exit architectures involves splitting a deep learning model between two endpoints, such as an edge device and a cloud server.

The shallow layers run on the first endpoint, while the deeper layers of the network are deployed on a cloud server; when an early exit is not confident enough, the remaining computation is deferred to the cloud.
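Under those assumptions, the split-inference logic might look like the sketch below; the send_to_cloud call and the threshold value are illustrative, not taken from the paper.

```python
# Sketch of split inference between an edge device and a cloud server.
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.9  # assumed deferral threshold

def classify(x, edge_blocks, edge_exit, send_to_cloud):
    h = edge_blocks(x)             # shallow layers run locally on the edge
    logits = edge_exit(h)          # lightweight on-device exit classifier
    confidence = F.softmax(logits, dim=1).max().item()
    if confidence >= CONFIDENCE_THRESHOLD:
        return logits              # resolved on-device, no network hop
    # A slowdown attack deliberately drives confidence below the threshold,
    # so every input takes this slow, expensive path.
    return send_to_cloud(h)        # deep layers run remotely (hypothetical)
```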

In this setting, a slowdown attack would force every input to make the round trip to the cloud; aside from the extra energy and server resources wasted, the added latency could have a much more destructive impact.

Instead of having full access to the target model, a realistic attacker has a surrogate model on which to test and tune the attack.

The attacker then transfers the attack to the actual target.

Such transfer attacks are much more realistic than full-knowledge attacks, said Yigitcan Kaya, another researcher on the team.
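In code, the transfer setting amounts to crafting the perturbation against a model the attacker controls and replaying it against the real target. The snippet below assumes the slowdown_perturb sketch from earlier and models that return a (logits, exit index) pair; all names are illustrative.

```python
# Craft on a surrogate (white-box), then transfer to the black-box target.
adv_x = slowdown_perturb(surrogate_model, x)  # gradients from the surrogate only
_, exit_clean = target_model(x)               # exit index on the clean input
_, exit_adv = target_model(adv_x)             # exit index on the crafted input
print(f"exit moved from layer {exit_clean} to layer {exit_adv}")
```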

The research team also tested DeepSloth on SkipNet, a special optimization technique for convolutional neural networks (CNNs).

Their findings showed that DeepSloth examples crafted for multi-exit architectures also caused slowdowns in SkipNet models.
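One rough way to quantify such a slowdown (an assumed metric, not necessarily the one the paper reports) is to compare the average exit depth on clean versus crafted inputs:

```python
# Average exit index over a set of inputs; higher means less early-exit
# benefit. Uses the (logits, exit_index) convention from the sketch above.
import torch

@torch.no_grad()
def avg_exit_depth(model, inputs):
    depths = [model(x.unsqueeze(0))[1] for x in inputs]
    return sum(depths) / len(depths)

print("clean:", avg_exit_depth(target_model, clean_inputs))
print("crafted:", avg_exit_depth(target_model, adv_inputs))
```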

I believe that slowdown attacks may become an important threat in the future, Kaya said.

Nowadays, even introductory deep learning courses include recent threat models like adversarial examples, he added.

The problem, Kaya believes, has to do with misaligned incentives.

Kaya believes there should be a shift in the incentives of publications and academia.

But their research might serve other purposes.

Our biggest problem is that we treat machine learning security as an academic problem right now.

So the problems we study and the solutions we design are also academic, Kaya said.

Even including a paragraph about the potential risks involved in a solution goes a long way.
