Human brainpower is no match for hackers unleashing AI-powered email hoaxes in digital smash-and-grab attacks. As a result, cyber security protection must be guided by AI solutions that know hackers’ strategies better than hackers do.

This approach of fighting AI with better AI emerged as an ideal strategy in research that cyber firm Darktrace conducted in March to gather insights into human behavior around email. The survey reaffirmed the need for new cyber tools to combat AI-driven hacker threats targeting businesses.

The study sought a better understanding of how employees react to potential security threats globally. It also underscored their growing knowledge of the need for better email security.

Darktrace’s global survey polled 6,711 employees in the US, UK, France, Germany, Australia, and the Netherlands. Alongside it, the firm observed a 135% increase in “novel social engineering attacks” across thousands of active Darktrace email customers from January to February 2023. The results were consistent with the widespread adoption of ChatGPT.

These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, without any links or attachments. The trend suggests that generative AI, such as ChatGPT, is providing an opportunity for threat actors to devise sophisticated and targeted attacks at speed and scale, according to the researchers.
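The surface signals the researchers describe can be computed directly from a message body. A toy sketch (the feature set here is my own illustration, not Darktrace’s detection model):

```python
import re

def linguistic_features(body: str) -> dict:
    """Compute toy surface features of an email body: text volume,
    punctuation density, average sentence length, and link count."""
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "char_count": len(body),
        "punctuation_density": sum(c in ".,;:!?" for c in body) / max(len(body), 1),
        "avg_sentence_words": len(words) / max(len(sentences), 1),
        "link_count": len(re.findall(r"https?://", body)),
    }

# A long, fluent, link-free message scores high on text volume and
# sentence length but zero on links, matching the profile described above.
features = linguistic_features(
    "Dear colleague, following our discussion, please review the revised "
    "projections; once approved, they should be circulated to the board "
    "before Friday. Kind regards."
)
```

A real classifier would combine many such signals with a learned baseline; the point is simply that these attacks shift measurable properties of the text rather than relying on malicious payloads.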

According to Max Heinemeyer, Chief Product Officer of Darktrace, one of the three most important findings from the research is that most employees are concerned about the threat of AI-generated emails.

“This is not surprising, as these emails are often indistinguishable from legitimate communications, and some of the signs that employees commonly look for in a ‘fake,’ such as poor spelling and grammar, are easily bypassed by chatbots, which are proving to be extremely proficient,” he told TechNewsWorld.

Research Highlights

Darktrace asked employees in retail, catering, and leisure companies how concerned they were that hackers could use generative AI to create scam emails indistinguishable from genuine communications. Eighty-two percent said they were worried.

When asked what makes them suspect an email is a phishing attack, respondents most often cited an invitation to click on a link or open an attachment (68%), an unknown sender or unexpected content (61%), and poor use of spelling and grammar (61%).

This is significant and troubling, as 45% of Americans surveyed noted that they had been the victim of a fraudulent email, according to Heinemeyer.

“It is unsurprising that employees are concerned about their ability to verify the legitimacy of email communications in a world where AI chatbots are increasingly able to mimic real-world conversations and generate emails that lack all the common signs of a phishing attack, such as malicious links or attachments,” he said.

Other key results of the survey include the following:

  • 70% of global employees have seen an increase in the frequency of scam emails and texts over the past six months
  • 87% of global workers are concerned about the amount of personal information about themselves available online that could be used in phishing and other email scams
  • 35% of respondents have tried ChatGPT or other generative AI chatbots

Human Error Guardrail

The wider reach of generative AI tools like ChatGPT and the increasing sophistication of nation-state actors means email scams are more credible than ever, noted Heinemeyer.

Innocent human error and insider threats remain an issue. Misdirecting an email is a risk for every employee and every organization. Nearly two out of five people have sent an important email to the wrong recipient with a similar-looking surname, either by mistake or because of autocomplete. That rate climbs to more than half (51%) in the financial services industry and stands at 41% in the legal sector.

Regardless of the fault, such human errors add another layer of security risk that is not malicious. A self-learning system can spot this error before sensitive information is shared incorrectly.
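A toy sketch of that idea (with a hypothetical contact list and threshold, not Darktrace’s actual system): flag a recipient whose address closely resembles, but does not exactly match, an established contact.

```python
import difflib

# Hypothetical list of the sender's established contacts
KNOWN_CONTACTS = ["j.anderson@acme.com", "k.patel@acme.com"]

def misdirection_warning(recipient: str, known=KNOWN_CONTACTS, cutoff=0.9):
    """Return the near-miss known contact if the recipient looks like a
    typo or autocomplete slip for an established address; None otherwise."""
    if recipient in known:
        return None  # exact match with a known contact: nothing suspicious
    close = difflib.get_close_matches(recipient, known, n=1, cutoff=cutoff)
    return close[0] if close else None

# 'andersen' is one letter off from the known 'anderson' contact,
# so the sketch surfaces the likely intended recipient.
print(misdirection_warning("j.andersen@acme.com"))
```

A production system would learn each user’s normal recipients and sharing patterns rather than rely on a static list, but the near-miss comparison captures the autocomplete failure mode described above.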

In response, Darktrace unveiled a significant update to its globally deployed email solution. This helps strengthen email security tools as organizations continue to rely on email as their primary collaboration and communication tool.

“Email security tools that rely on knowledge of past threats are failing to future-proof organizations and their people against email threats,” he said.

Darktrace’s latest email capability includes behavioral detection for misdirected emails that prevents intellectual property or confidential information from being sent to the wrong recipient, according to Heinemeyer.

AI Cyber Security Initiative

By understanding what’s normal, AI security can determine what doesn’t belong in a particular person’s inbox. Email protection systems often get it wrong, with 79% of respondents saying their company’s spam/security filters wrongfully block important legitimate email from reaching their inboxes.

With a deep understanding of the organization and how the individuals within it interact with their inbox, AI can determine for each email whether it is suspicious and should be acted upon or if it is legitimate and should be left untouched.

“Tools that work from knowledge of historical attacks will be no match for AI-generated attacks,” Heinemeyer offered.

Analysis of these attacks shows significant linguistic deviations – both semantically and syntactically – compared to other phishing emails. This leaves little doubt that traditional email security tools, which operate from knowledge of historical threats, will fall short in picking up on the subtle indicators of these attacks, he explained.

Reinforcing this, research from Darktrace has shown that email security solutions employing native, cloud, and static AI tools take an average of 13 days from the time a victim is attacked until the breach is detected.

“That leaves defenders vulnerable for about two weeks if they rely solely on these tools. AI defense that understands the business will be critical to detecting these attacks,” he said.

Need for AI-Human Partnership

Heinemeyer believes that the future of email security lies in a partnership between AI and humans. In this arrangement, algorithms are responsible for determining whether a communication is malicious or benign, thereby shifting the burden of responsibility away from humans.

“Training on good email security practices is important, but will not be enough to stop AI-generated threats that look like perfectly benign communications,” he warned.

One of the revolutions AI is enabling in the email space is a deeper understanding of “you.” Rather than trying to predict attacks, an understanding of each employee should be built from their email inbox: their relationships, tone of voice, sentiments, and hundreds of other data points, he argued.

“By leveraging AI to address email security threats, we not only mitigate risk but revitalize organizational trust and contribute to business outcomes. In this scenario, humans are freed up to operate at a higher level, on more strategic practices,” he said.

Not an Insurmountable Cyber Security Problem

The threat of offensive AI has been researched on the defensive side for a decade. Attackers will inevitably use AI to enhance their operations and maximize ROI, noted Heinemeyer.

“But it’s not something we would consider insurmountable from a defense perspective. The irony is that generative AI may supercharge the social engineering challenge, but AI that knows you can parry it,” he predicted.

Darktrace tests offensive AI prototypes against the company’s technology to continually test the efficacy of its defenses in advance of this inevitable evolution in the attack landscape. The company is confident that AI coupled with deep business understanding will be the most powerful way to combat these threats as they continue to evolve.

Nearly all of the top 10 universities in the United States, United Kingdom, and Australia are putting their students, faculty, and staff at risk of email compromise by failing to prevent attackers from spoofing the schools’ email domains.

Universities in the United States are most at risk with the worst levels of security, followed by the United Kingdom, then Australia, according to a report released Tuesday by enterprise security company Proofpoint.

The report is based on an analysis of Domain-based Message Authentication, Reporting and Conformance (DMARC) records at the schools. DMARC is a nearly decade-old email authentication protocol used to verify the domain of an email message before it reaches its destination.

The protocol provides three levels of protection – Monitor, Quarantine, and the strongest, Reject. The report found that none of the top universities had the Reject level of protection enabled.
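Each protection level corresponds to the p= tag of a TXT record published in DNS at _dmarc.<domain>. A minimal sketch, using a hypothetical example.edu zone (a domain publishes only one such record at a time; the three lines show the alternatives):

```
; Monitor (p=none): mail that fails DMARC is still delivered, but reports are sent
_dmarc.example.edu.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.edu"

; Quarantine: failing mail is routed to spam/junk
_dmarc.example.edu.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.edu"

; Reject (the strongest level): failing mail is refused outright
_dmarc.example.edu.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.edu"
```

The rua tag directs aggregate reports to a monitoring mailbox, which is how organizations verify legitimate mail flows before tightening the policy.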

“Higher education institutions hold a greater proportion of sensitive personal and financial data, perhaps more than any industry outside of healthcare,” Ryan Kalember, Proofpoint’s executive vice president of cybersecurity strategy, said in a statement.

“Unfortunately, this makes these institutions a highly attractive target for cybercriminals,” he continued. “The pandemic and rapid changes in distance learning have further increased cybersecurity challenges for tertiary education institutions and open them up to significant risks from malicious email-based cyberattacks such as phishing.”

Barriers to Adoption of DMARC

Universities are not alone in poor DMARC implementation.

A recent analysis of 64 million domains globally by Red Sift, a London-based maker of an integrated email and brand protection platform, found that only 2.1% of domains had implemented DMARC. Furthermore, only 28% of all publicly traded companies in the world have fully implemented the protocol, while 41% have enabled only its basic level.

An organization can have many reasons for not adopting DMARC. “There may be a lack of awareness of the importance of implementing DMARC policies, as well as companies not being fully aware of how to begin implementing the protocol,” explained Ryan Witt, Proofpoint’s industries solutions and strategy leader.

“Additionally,” he continued, “the lack of government policy to mandate DMARC as a requirement may be a contributing factor.”

“Further, with the pandemic and the current economy, organizations are struggling to change their business models, so competing priorities and lack of resources are also likely factors,” he said.

Installing the technology can also be challenging. “It requires the ability to publish DNS records, which takes experience in systems and network administration,” explained Craig Lurey, CTO and co-founder of Keeper Security, a provider of zero-trust and zero-knowledge cybersecurity software in Chicago.

Furthermore, he told TechNewsWorld: “Many layers of setup are necessary to implement DMARC properly. The rollout of the policy needs to be closely monitored to ensure that legitimate email is not being blocked.”
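As a rough sketch of what that monitoring involves (a hypothetical record and helper function, not any vendor’s tool): once the TXT record is published, its tags can be parsed and the current policy checked before tightening it.

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record ('v=DMARC1; p=none; ...') into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key] = value
    return tags

# A rollout typically starts at p=none (monitor), with aggregate reports
# (the rua address) reviewed before moving to quarantine and finally reject.
record = "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.edu; pct=100"
policy = parse_dmarc(record)
print(policy["p"])        # current enforcement level
print("rua" in policy)    # whether reporting is configured
```

Fetching the live record requires a DNS TXT lookup of _dmarc.<domain>, which is outside the standard library; the parsing step above is the same either way.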

No Silver Bullet for Spoofing

Nicole Hoffman, a senior cyber threat intelligence analyst at Digital Shadows, a provider of digital risk protection solutions in San Francisco, agreed that implementing DMARC can be a daunting task. “If implemented incorrectly, it can break things and disrupt business operations,” she told TechNewsWorld.

“Some organizations hire third parties to assist with implementation, but this requires financial resources that need to be approved,” she said.

She cautioned that DMARC will not protect against all forms of email domain spoofing.

“If you receive an email that appears to be from Bob on Google, but the email actually originated from Yahoo Mail, DMARC will detect it,” she explained. “However, if a threat actor registers a domain similar to that of Google, such as Google3, DMARC will not detect it.”
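The gap exists because DMARC authenticates the exact domain in the From: header; a lookalike domain the attacker actually owns can pass its own DMARC checks. Catching lookalikes takes a separate similarity comparison. A toy sketch (with a hypothetical list of protected domains and an arbitrary threshold):

```python
import difflib

# Hypothetical brands/domains we want to guard against lookalikes
PROTECTED = ["google.com", "example.edu"]

def lookalike_of(domain: str, protected=PROTECTED, threshold=0.85):
    """Return the protected domain this one resembles, or None.
    An exact match is the legitimate domain, not a lookalike."""
    for real in protected:
        if domain == real:
            return None
        if difflib.SequenceMatcher(None, domain, real).ratio() >= threshold:
            return real
    return None

print(lookalike_of("google3.com"))  # resembles google.com: flagged
print(lookalike_of("google.com"))   # exact legitimate domain: None
```

Commercial brand-protection tools add homoglyph handling, newly-registered-domain feeds, and keyboard-distance models, but the core idea is this kind of similarity test layered on top of DMARC.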

Unused domains can also be a way to avoid DMARC. “Domains that are registered but unused are also prone to email domain spoofing,” Lurey explained. “Even when organizations have implemented DMARC on their primary domains, failing to enable DMARC on unused domains makes them potential targets for spoofing.”

Unique Challenges of Universities

Universities can have their own difficulties when it comes to implementing DMARC.

“Many times universities don’t have a centralized IT department,” Brian Westnage, Red Sift’s senior director of global channels, told TechNewsWorld. “Each college has its own IT department operating in silos. That can make it a challenge to implement DMARC across the organization, as everyone is doing something different with email.”

Witt said the ever-changing student population at universities, coupled with a culture of openness and information-sharing, can often conflict with the rules and controls needed to effectively protect users and systems from attack and compromise.

In addition, he continued, many educational institutions have an affiliated health system, so they need to comply with the controls associated with a regulated industry.

Funding at universities can also be an issue, noted John Bambenek, principal threat hunter at Netenrich, a San Jose, Calif.-based IT and digital security operations company. “The biggest challenge for universities is under-funding of security teams – if they have one – and under-funding of IT teams in general,” he told TechNewsWorld.

“Universities don’t pay particularly well, so part of it is the knowledge gap,” he said.

“Many universities have a culture against enforcing any policies that may hinder research,” he said. “When I worked at a university 15 years ago, there were knock-down, drag-out fights over mandatory antivirus on workstations.”

Costly Problem

Mark Arnold, vice president of advisory services at LARES, an information security consulting firm in Denver, noted that domain spoofing is a significant threat to organizations and a favorite technique of threat actors impersonating businesses and employees.

“Organizational threat models must account for this prevalent threat,” he told TechNewsWorld. “Implementing DMARC helps organizations filter and validate messages and thwart phishing campaigns and other business email compromise attacks.”

Business email compromise (BEC) is probably the most costly problem in all of cyber security, maintained Witt. According to the FBI, BEC scams cost victims $43 billion between June 2016 and December 2021.

“Most people don’t realize how exceptionally easy it is to spoof email,” Witt said. “Anyone can send a BEC email to an intended target, and there is a high probability of it getting through, especially if the impersonated organization is not authenticating their email.”

“These messages often do not contain malicious links or attachments, bypassing traditional security solutions that analyze messages for these traits,” he continued. “Instead, emails are sent only with text designed to prepare the victim to act.”

“Domain spoofing, and its cousin typosquatting, are some of the lowest-hanging fruit for cybercriminals,” Bambenek said. “If you can get people to click on your email because it looks like it’s coming from their own university, you’ll get a higher click-through rate and, by extension, more fraud damages, stolen credentials, and more successful cybercrime.”

“In recent years,” he said, “attackers have been stealing students’ financial aid refunds. There is a lot of money to be made by criminals here.”