For years the security industry has stressed the importance of strong passwords. Some recent research from Home Security Heroes clearly illustrates the value of that advice.

Using artificial intelligence, the crew at the home security information and reviews site cracked passwords in the four- to seven-character range either instantly or within a few minutes, even when the password contained a mix of numbers, uppercase and lowercase letters, and symbols.

After feeding more than 15.6 million passwords into an AI-powered password cracker called PassGAN, the researchers concluded that 51% of common passwords could be cracked in less than a minute.

However, the AI software faltered against longer passwords. A number-only password of 18 characters would take at least 10 months to crack, and a password of that length with numbers, upper- and lower-case letters, and symbols would take six quintillion years to crack.

On the Home Security Heroes website, the researchers explain that PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from real password leaks and generate realistic passwords that hackers can exploit.
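To make that idea concrete, here is a minimal, untrained sketch of what a GAN-style password generator looks like in PyTorch. It is an illustrative toy, not the actual PassGAN code: the character set, lengths, and layer sizes are assumptions, and real training against leaked password corpora is omitted.

```python
# A minimal, untrained GAN sketch for password generation using PyTorch.
# Illustrative toy only: not the actual PassGAN code. The alphabet,
# maximum length, and layer sizes are assumptions for demonstration.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"  # assumed character set
MAX_LEN = 8                                     # assumed max password length
NOISE_DIM = 64

class Generator(nn.Module):
    """Maps random noise to a per-position distribution over characters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, MAX_LEN * len(CHARS)),
        )

    def forward(self, z):
        logits = self.net(z).view(-1, MAX_LEN, len(CHARS))
        return torch.softmax(logits, dim=-1)

class Discriminator(nn.Module):
    """Scores how 'real' a candidate looks; during GAN training it is shown
    both generated passwords and passwords from real leaks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(MAX_LEN * len(CHARS), 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

def sample_passwords(gen, n=5):
    """Draw noise and take the most likely character at each position."""
    with torch.no_grad():
        probs = gen(torch.randn(n, NOISE_DIM))
    idx = probs.argmax(dim=-1)
    return ["".join(CHARS[i] for i in row) for row in idx.tolist()]

gen = Generator()
print(sample_passwords(gen))  # gibberish until trained on leaked passwords
```

After training, the generator's output distribution mimics the leaked corpus, which is what makes its guesses "realistic" in the sense the researchers describe.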

“AI algorithms are continuously A/B tested against each other millions of times to stimulate learning, pairing human knowledge with microchips up to 100,000 times faster than the human brain,” explained Domingo Guerra, executive vice president of trust at Encode Technologies, an international identity verification and biometric authentication company.

“Compared to traditional brute-force algorithms with limited capacity, AI predicts the most likely next character,” he told TechNewsWorld. “Instead of acquiring knowledge externally, it leans on the patterns it built up during training to produce likely guesses quickly.”

Doubts About AI

Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative, observed that, based on what has been publicly disclosed, the AI appears to use techniques similar to rainbow table attacks rather than simply brute-forcing a password. Hackers use rainbow tables to translate hashed passwords into plaintext.

“Rainbow tables allow AI to perform simple search and compare operations on hashed passwords, rather than a slow, brute force attack,” he told TechNewsWorld.

“Rainbow table attacks have been around for years and have been shown to crack even 14-character passwords in under five minutes,” he said. “Even older hashing algorithms, such as MD5 and SHA-1, are susceptible to these types of attacks.”
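The search-and-compare idea Childs describes can be illustrated in a few lines of Python. This shows only the precompute-then-lookup concept; real rainbow tables add hash chains and reduction functions to save space, and the wordlist and "leaked" hash below are invented examples.

```python
# Precompute-then-lookup, the core idea behind rainbow-table-style cracking:
# hash every candidate once, then recovering a leaked hash is a dictionary
# lookup instead of a live brute-force run.
import hashlib

wordlist = ["password", "letmein", "qwerty", "dragon"]

# Unsalted MD5 is used here because the article notes older algorithms
# such as MD5 and SHA-1 are especially susceptible.
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

leaked_hash = hashlib.md5(b"qwerty").hexdigest()  # pretend this came from a breach
print(table.get(leaked_hash, "not in table"))     # -> qwerty
```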


Robert Hughes, chief information security officer at RSA, a cybersecurity company in Bedford, Mass., explained that most password cracking is done by first obtaining a hashed password and then comparing candidate guesses against it.

“In theory,” he continued, “an AI could learn more information about a subject and use it to guess passwords in a more intelligent way, but this has not been proven in practice.”

“Security teams have been battling brute force and rainbow tables for years now,” he said. “In practice, the PassGAN AI model does not perform much faster than the other techniques already available to threat actors.”

AI’s Limitations

Roger Grimes, a defense evangelist at KnowBe4, a security awareness training provider in Clearwater, Fla., is also not convinced that AI can crack passwords any faster than traditional methods.

“Probably it can, and certainly it will be able to in the future,” he told TechNewsWorld, “but no one has shown me a definitive test of any AI system today that cracks passwords faster than non-AI, traditional password-guessing methods.”

“As more and more people use password managers that generate truly random passwords, AI will have zero advantage over any traditional password cracking when the passwords involved are truly random, as they should already be,” he said.
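For reference, a truly random password of the kind a manager generates can be produced with Python's standard-library secrets module; the length and character set below are arbitrary illustrative choices.

```python
# A truly random 18-character password of the kind a password manager
# generates, built with Python's standard-library secrets module.
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(18))
print(password)
```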

Security experts point out some limitations of using AI to crack passwords. For example, computing power can be a challenge. “Cracking longer and more complex passwords takes a significant amount of time — even by AI,” Childs said.
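Some back-of-the-envelope arithmetic shows why length and character variety dominate cracking time regardless of the guessing engine. The guess rate below is an assumed round number, not a benchmark of PassGAN or any other cracker.

```python
# Back-of-the-envelope keyspace arithmetic. The guess rate is an assumed
# round number (one trillion guesses per second), not a measured figure.
GUESSES_PER_SECOND = 10**12

def years_to_exhaust(charset_size, length):
    keyspace = charset_size ** length              # total possible passwords
    return keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

print(years_to_exhaust(10, 7))    # 7 digits: a tiny fraction of a second
print(years_to_exhaust(94, 18))   # 18 chars from 94 printables: roughly 1e16 years
```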

“It is also unclear how AI will fare against the salting mechanism used in some hashing algorithms,” he said.
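Salting is straightforward to sketch: a random per-user value is mixed into the hash, so identical passwords yield different digests and a precomputed table of plain hashes no longer matches. The snippet below is a minimal illustration, not a recommendation of raw SHA-256 over a dedicated password-hashing function.

```python
# Minimal illustration of salting. A dedicated password-hashing function
# (bcrypt, scrypt, Argon2) would be used in practice instead of raw SHA-256.
import hashlib
import os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex(), digest

print(hash_password("letmein"))
print(hash_password("letmein"))  # different salt, so a different digest
```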

There’s a big difference between generating a huge number of password guesses and being able to enter those guesses in a real-world scenario, said John Gunn, CEO of Token, a maker of a biometric-based wearable authentication ring in Rochester, N.Y.

“Most apps and systems allow only a small number of incorrect entries before locking out a hacker, and AI doesn’t change that,” he told TechNewsWorld.

Long Goodbye to Passwords

Of course, one wouldn’t have to worry about AI cracking passwords if there were no passwords to crack. That doesn’t seem likely, at least in the near term, despite annual predictions about the end of passwords.

“Over time, we are likely to take the annoyance out of password management by removing the clunky manual process of remembering and entering long strings of numbers and letters to gain access,” said Darren Guccione, CEO of Keeper Security, a password management and online storage company in Chicago.

“But given the billions of existing devices and systems that already rely on password protection, passwords will still be with us for the foreseeable future,” he told TechNewsWorld. “We can only provide stronger security to support their safe use.”


Grimes said there has been a movement to get rid of passwords since the late 1980s. “There are thousands of articles predicting the death of the password, and yet decades later, it’s still a struggle,” he said.

“If you put all non-password authentication solutions together, they don’t work on even 2% of the world’s sites and services,” he continued. “It’s a problem, and it’s preventing widespread adoption.”

“On a good note, more people today use some form of non-password authentication to log on to one or more sites and services. The percentage is higher than ever before,” he said.

“But as long as the overall percentage of sites and services remains below 2%, it’s difficult to reach the ‘tipping point’ for large-scale adoption of non-password authentication,” he said. “It’s a frustratingly difficult real-world chicken-and-egg problem.”

Hughes acknowledged that legacy systems, as well as trust from users and administrators, have slowed the move away from passwords. However, he added: “Ultimately, passwords will be used sparingly, mostly where they are appropriate or where systems cannot be updated to support other methods, but it will still take years for most people and companies to move beyond passwords.”

As if defenders of the software supply chain didn’t have enough attack vectors to worry about, they now have a new one: machine learning models.

ML models are at the heart of technologies such as facial recognition and chatbots. Like open-source software repositories, models are often downloaded and shared by developers and data scientists, so a compromised model can have effects on multiple organizations at once.

Researchers from machine learning security company HiddenLayer revealed in a blog post on Tuesday how an attacker could use a popular ML model to deploy ransomware.

The method described by the researchers is similar to how hackers use steganography to hide malicious payloads in images. In the case of ML models, the malicious code is hidden in the model’s data.

According to the researchers, the steganography process is quite general and can be implemented in most ML libraries. They added that the process need not be limited to embedding malicious code in models and could also be used to exfiltrate data from an organization.
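As a rough illustration of the steganographic idea, the sketch below hides arbitrary bytes in the least-significant mantissa bits of a float32 weight tensor and reads them back out. It is a toy using NumPy, not HiddenLayer's tooling: a real attack would also embed a loader that reassembles and executes the hidden payload when the model is deserialized.

```python
# Toy illustration of hiding bytes in the least-significant bits of float32
# weights with NumPy. Not HiddenLayer's tooling; layer shape and payload are
# invented for demonstration.
import numpy as np

def embed(weights, payload):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    raw = weights.astype(np.float32).view(np.uint32).ravel()
    # Overwrite one low-order bit per weight; the float values barely change.
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32).reshape(weights.shape)

def extract(weights, n_bytes):
    raw = weights.astype(np.float32).view(np.uint32).ravel()
    bits = (raw[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

layer = np.random.randn(4, 1024).astype(np.float32)  # stand-in for one layer's weights
secret = b"malicious payload would go here"
stego_layer = embed(layer, secret)
print(extract(stego_layer, len(secret)))              # recovers the hidden bytes
```

Because only the lowest mantissa bit of each weight changes, the model behaves essentially the same and the file looks like an ordinary checkpoint.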

Machine learning model hijacking (image courtesy of HiddenLayer)


Attacks can also be operating system agnostic. The researchers pointed out that OS and architecture-specific payloads can be embedded in the model, where they can be loaded dynamically at runtime depending on the platform.

Flying Under the Radar

Tom Bonner, senior director of adversarial threat research at Austin, Texas-based HiddenLayer, said that embedding malware in ML models provides some advantage to an adversary.

“It allows them to fly under the radar,” Bonner told TechNewsWorld. “This is not a technology that is detected by current antivirus or EDR software.”

“It also opens up new targets for them,” he said. “It’s a direct route into data scientists’ systems. It’s possible to hijack machine-learning models hosted on public repositories. Data scientists will pull them down and load them, and then their systems are compromised.”

“These models are also downloaded to various machine-learning ops platforms, which can be very scary because they can have access to Amazon S3 buckets and steal training data,” he continued.

“Most of [the] machines running machine-learning models tend to have bigger, fatter GPUs, so bitcoin miners can be very effective on those systems as well,” he said.

HiddenLayer demonstrated how its hijacked pre-trained ResNet model executed a ransomware sample the moment the model was loaded into memory by PyTorch on its test machine.
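The load-time execution is possible because PyTorch's default checkpoint format is pickle-based, and pickle lets an object name a callable to invoke during deserialization. The minimal sketch below uses plain pickle and a harmless print call to show the mechanism; it is not the ransomware demonstration itself.

```python
# Why loading an untrusted model file can run code: pickle lets an object
# name a callable to invoke during deserialization. Plain pickle and a
# harmless print() stand in here for torch.load() and a real payload.
import pickle

class Hijacked:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call print(...)".
        # An attacker would substitute something far more damaging.
        return (print, ("payload executed at load time",))

blob = pickle.dumps(Hijacked())   # what would ship inside a model file
pickle.loads(blob)                # merely loading the blob runs the payload
```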


First-Mover Advantage

Chris Clements, vice president of solutions architecture at Cerberus Sentinel, a cybersecurity consulting and penetration testing company in Scottsdale, Ariz., noted that attackers often like to exploit unanticipated weaknesses in new technologies.

“Attackers looking for first-mover advantage in these frontiers can count on defenders having less preparedness and less proactive protection when they exploit new technologies,” Clements told TechNewsWorld.

“This attack on machine-learning models looks like it could be the next phase of the cat-and-mouse game between attackers and defenders,” he said.

Threat actors will take advantage of whatever vectors they can to carry out their attacks, explained Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation in Tel Aviv, Israel.

“It’s an unusual vector that can get past some common defensive tools if done carefully,” Parkin told TechNewsWorld.

Traditional anti-malware and endpoint detection and response solutions are designed to detect ransomware based on patterns of behavior, including virus signatures and the monitoring of key API, file, and registry requests on Windows for potentially malicious activity, explained Morey Haber, chief security officer at BeyondTrust, a developer of privileged account management and vulnerability management solutions in Carlsbad, Calif.

“If machine learning is applied to the delivery of malware such as ransomware, traditional attack vectors and even detection methods can be changed to appear non-malicious,” Haber told TechNewsWorld.

Potential for Extensive Damage

Attacks on machine-learning models are on the rise, said Karen Crowley, director of product solutions at Deep Instinct, a deep-learning cybersecurity company in New York City.

“It’s not critical yet, but widespread damage is likely,” Crowley told TechNewsWorld.

“In the supply chain, if the data is poisoned so that the system is poisoned as the model is trained, that model can make decisions that weaken rather than strengthen protection,” she explained.

“In the cases of Log4j and SolarWinds, we saw an impact not only on the organization that has the software, but all of its users in that chain,” she said. “Once ML is introduced, the damage can add up quickly.”

Casey Ellis, CTO and founder of BugCrowd, which operates a crowdsourced bug bounty platform, said attacks on ML models could be part of a larger trend of attacks on software supply chains.

“Just as adversaries can attempt to compromise the supply chain of software applications to insert malicious code or vulnerabilities, they can also target the supply chain of machine learning models to insert malicious or biased data or algorithms,” Ellis told TechNewsWorld.

“This can have a significant impact on the reliability and integrity of AI systems and can be used to undermine trust in the technology,” he said.

Pablum for Script Kiddies

Threat actors may show increased interest in machine-learning models because they are more vulnerable than people thought.

“People have known this was possible for a while, but they didn’t realize how easy it was,” Bonner said. “It’s fairly trivial to put together an attack with a few simple scripts.”

He added, “Now that people have realized how easy it is, it’s in the realm of script kiddies.”

Clements agreed that the researchers have shown that it does not require hardcore ML/AI data science expertise to insert malicious commands into training data that can then be triggered by ML models at runtime.

However, he continued, more sophistication is required than for run-of-the-mill ransomware attacks, which rely primarily on simple credential stuffing or phishing to launch.

“Right now, I think the popularity of this specific attack vector is likely to remain limited for the foreseeable future,” he said.

“Exploiting this requires an attacker either compromising an upstream ML model project used by downstream developers or tricking the victim into downloading a pre-trained ML model with embedded malicious commands from an untrusted source,” he explained.

“In each of these scenarios,” he continued, “it appears that there would be much easier and more straightforward ways to compromise the target than inserting obfuscated exploits into the training data.”