Hiding malware in deep learning models
Every deep learning model is composed of multiple layers of artificial neurons.
Large neural networks can comprise hundreds of millions or even billions of parameters.
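For a sense of scale, here is a minimal sketch (assuming PyTorch, which the original doesn’t name) that counts the parameters of a toy convolutional network; production models hold millions or billions:

```python
import torch.nn as nn

# A toy convolutional network; real models such as AlexNet are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),  # assumes 32x32 RGB inputs
)

# Every weight and bias in every layer counts toward the total.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # ~164k for this toy network
```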

Hiding malware inside a model’s parameters is a form of steganography, the practice of concealing one piece of information in another.
Convolutional neural networks (CNNs) make an interesting study for several reasons.
First, they are fairly large, usually containing dozens of layers and millions of parameters.

Second, they comprise different types of layers, such as convolutional and fully connected layers, which makes it possible to evaluate the effects of malware embedding in different settings.
AlexNet is 178 megabytes and has five convolutional layers and three dense (or fully connected) layers.
Up to a certain point, the researchers could embed malware in the network’s parameters without a noticeable effect on its accuracy; if they increased the volume of malware data beyond that, the accuracy would start to drop significantly.
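The paper’s exact encoding is more involved, but a minimal sketch of the general idea, assuming NumPy and a float32 weight tensor (the function names and byte layout here are illustrative, not the authors’ code), is to overwrite the low-order bytes of each parameter with payload bytes while preserving the most significant byte:

```python
import numpy as np

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the three low-order bytes of each float32 weight."""
    w = weights.astype(np.float32)                      # work on a copy
    raw = w.reshape(-1).view(np.uint8).reshape(-1, 4)   # shares memory with w
    if len(payload) > raw.shape[0] * 3:
        raise ValueError("payload exceeds the model's byte capacity")
    padded = payload.ljust(-(-len(payload) // 3) * 3, b"\x00")
    data = np.frombuffer(padded, dtype=np.uint8).reshape(-1, 3)
    # Byte 3 (most significant on little-endian systems) holds the sign bit and
    # most of the exponent, so each modified weight keeps its rough magnitude.
    raw[: data.shape[0], :3] = data
    return w

def extract(weights: np.ndarray, length: int) -> bytes:
    """Recover an embedded payload of known length."""
    raw = np.ascontiguousarray(weights, dtype=np.float32)
    raw = raw.reshape(-1).view(np.uint8).reshape(-1, 4)
    return raw[: -(-length // 3), :3].tobytes()[:length]

# Usage: embed a stand-in payload and recover it bit-for-bit.
weights = np.random.randn(1_000).astype(np.float32)
secret = b"stand-in for a malware payload"
infected = embed(weights, secret)
assert extract(infected, len(secret)) == secret
```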

They next tried to retrain the model after infecting it.
By freezing the infected neurons, they prevented them from being modified during the extra training cycles.
They obtained similar results, which suggests that malware embedding is a universal threat to large neural networks.
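One way to approximate such freezing in PyTorch (a sketch under assumptions; the layer and mask below are hypothetical, and the researchers’ actual procedure may differ) is to zero the gradients of the infected weights before each optimizer step:

```python
import torch
import torch.nn as nn

# Stand-in for a layer whose weights carry payload bytes (hypothetical example).
layer = nn.Linear(128, 64)

# Boolean mask marking the "infected" weights that must stay byte-identical.
infected_mask = torch.zeros_like(layer.weight, dtype=torch.bool)
infected_mask[:16] = True  # pretend the first 16 output neurons hold the payload

# Gradient hook: zero the gradient wherever the mask is set, so an optimizer
# step leaves those weights untouched. (Plain SGD here; momentum or weight
# decay would still move them and corrupt the payload.)
layer.weight.register_hook(lambda g: g.masked_fill(infected_mask, 0.0))

optimizer = torch.optim.SGD(layer.parameters(), lr=0.01)
loss = layer(torch.randn(32, 128)).pow(2).mean()  # dummy objective for one step
loss.backward()
optimizer.step()  # the frozen weights are unchanged
```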

The payload maintains its integrity only if its bytes remain intact. Because gradient updates nudge nearly every parameter, even a single epoch of training is probably enough to destroy any malware embedded in the model.
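This fragility is easy to demonstrate: hash the payload before embedding and compare after any weight update. A sketch reusing the hypothetical embed/extract helpers from the earlier example:

```python
import hashlib
import numpy as np

weights = np.random.randn(4_096).astype(np.float32)
payload = b"stand-in payload"
checksum = hashlib.sha256(payload).digest()

infected = embed(weights, payload)  # helpers from the embedding sketch above
assert hashlib.sha256(extract(infected, len(payload))).digest() == checksum

# Simulate one small gradient-like update that nudges every parameter.
infected -= 1e-4 * np.random.randn(*infected.shape).astype(np.float32)

recovered = extract(infected, len(payload))
print(hashlib.sha256(recovered).digest() == checksum)  # almost certainly False
```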
The differences between machine learning models and classic rule-based software require new ways to think about security threats.

While the Adversarial ML Threat Matrix focuses on adversarial attacks, its methods are also applicable to threats such as EvilModel.
You can read the original article here.

