
Researchers demonstrate that malware can be hidden inside AI models


An application to Boston University is hidden in this photo. The technique introduced by Wang, Liu, and Cui could hide data in an image classifier rather than just an image.

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui published a paper last Monday demonstrating a new technique for slipping malware past automated detection tools – in this case, by hiding it inside a neural network.

The three embedded 36.9 MiB of malware into a 178 MiB AlexNet model without significantly altering the model's function. The malware-embedded model classified images with nearly identical accuracy, within 1% of the malware-free model. (This is possible because the number of layers and the total number of neurons in a convolutional neural network are fixed before training – which means that, much like in the human brain, many of the neurons in a trained model end up largely or entirely dormant.)
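To make the idea concrete, here is a minimal weight-steganography sketch in the spirit of the paper – not the authors' actual code. It assumes float32 weights on a little-endian platform and overwrites only the two low-order mantissa bytes of each weight, so each value shifts by under one percent; the function names and the two-byte choice are illustrative assumptions, and the paper uses its own encoding and layer selection.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the two low-order (mantissa) bytes of each
    float32 weight. Assumes a little-endian platform, so bytes 0 and 1 of
    each 4-byte float are the least significant mantissa bits; overwriting
    them changes each weight by less than 1%, so accuracy barely moves.
    """
    flat = weights.astype(np.float32).ravel()       # own contiguous copy
    raw = flat.view(np.uint8).reshape(-1, 4)        # 4 bytes per weight
    slots = raw[:, :2]                              # 2 usable bytes per weight
    data = np.frombuffer(payload, dtype=np.uint8)
    if data.size > slots.size:
        raise ValueError("payload too large for this weight tensor")
    full, rem = divmod(data.size, 2)
    slots[:full] = data[: full * 2].reshape(-1, 2)  # whole byte pairs
    if rem:                                         # trailing odd byte
        slots[full, 0] = data[-1]
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` bytes previously hidden by embed_payload."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:, :2].ravel()[:length].tobytes()
```

At two spare bytes per four-byte weight, even this naive scheme offers roughly half a tensor's size in raw hiding capacity, which gives a sense of how tens of mebibytes can fit inside a 178 MiB model.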

Just as important, the malware was embedded into the model in a way that prevented it from being detected by standard antivirus engines. VirusTotal, a service that "scans items with over 70 antivirus scanners and URL/domain blocklisting services, in addition to a variety of tools to extract signals from the content it inspects," raised no suspicions about the malware-embedded model.

The researchers' technique selects the best layer to work with in an already trained model and then embeds the malware into that layer. With an existing trained model – for example, a widely used image classifier – the embedding can have an undesirably large impact on accuracy if there aren't enough dormant or mostly dormant neurons.
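As a rough illustration of that layer-selection step (a sketch of the general idea, not the paper's procedure): embed a dummy payload of the right size into each candidate layer and keep the layer whose accuracy suffers least. This reuses the embed_payload helper sketched above, treats the model as a plain dict of numpy weight tensors, and assumes the attacker supplies an evaluate_accuracy callback run against their own test data – all names here are hypothetical.

```python
import os

def pick_best_layer(weights_by_layer, payload_size, evaluate_accuracy):
    """Return the layer name whose accuracy drops least after embedding."""
    dummy = os.urandom(payload_size)                # stand-in payload
    best_layer, best_acc = None, -1.0
    for name, w in weights_by_layer.items():
        if w.size * 2 < payload_size:               # 2 spare bytes per weight
            continue                                # layer too small to hold it
        trial = dict(weights_by_layer)
        trial[name] = embed_payload(w, dummy)
        acc = evaluate_accuracy(trial)              # attacker's own test set
        if acc > best_acc:
            best_layer, best_acc = name, acc
    return best_layer, best_acc
```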


If the accuracy of a malware-embedded model isn't good enough, the attacker can instead start with an untrained model, add a large number of extra neurons, and then train it on the same dataset the original model used. This should produce a model of larger size but equivalent accuracy, with far more room to hide nasty things inside. A toy sketch of that idea follows.
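The sketch below illustrates the "just make the network bigger" idea, assuming PyTorch as the framework (an assumption for illustration; the paper works with standard CNN architectures, and the widths here are arbitrary). Both heads would be trained on the same dataset; the wider one keeps comparable accuracy while leaving many near-dormant weights that can later carry hidden bytes.

```python
import torch.nn as nn

def classifier_head(hidden: int, num_classes: int = 1000) -> nn.Sequential:
    """An AlexNet-style classifier head with a configurable hidden width."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(256 * 6 * 6, hidden),   # flattened convolutional features
        nn.ReLU(inplace=True),
        nn.Linear(hidden, num_classes),
    )

baseline = classifier_head(hidden=4096)    # roughly AlexNet's usual width
oversized = classifier_head(hidden=8192)   # extra capacity to hide data in
```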

The good news is that we're essentially just talking about steganography – the new technique is a way to hide malware, not to run it. To actually execute the malware, it must be extracted from the poisoned model by another malicious program and reassembled into its working form. The bad news is that neural network models are considerably larger than typical photographic images, giving attackers room to hide far more illicit data in them without detection.
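A toy round trip with the helper functions sketched earlier shows what that extract-and-reassemble step amounts to; the weight tensor and payload below are harmless placeholders, and the hash check simply confirms the bytes come back out unchanged.

```python
import hashlib
import numpy as np

layer = np.random.randn(1024, 1024).astype(np.float32)   # stand-in weight tensor
payload = b"not real malware, just placeholder bytes"

stego_layer = embed_payload(layer, payload)
recovered = extract_payload(stego_layer, len(payload))

assert recovered == payload
assert hashlib.sha256(recovered).hexdigest() == hashlib.sha256(payload).hexdigest()
```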

Cybersecurity researcher Dr. Lukasz Olejnik told Motherboard that he didn't think the new technique offered an attacker much. "Today it wouldn't be easy for antivirus software to spot it, but that's just because nobody is looking." Still, the technique represents yet another way to smuggle data past digital sentries and into a potentially less protected internal network.
