Human brainpower alone is no match for hackers launching digital smash-and-grab attacks via email hoaxes powered by artificial intelligence. As a result, cyber security protection must be guided by AI solutions that know hackers’ strategies better than the hackers do.
Fighting AI with better AI emerged as the ideal strategy in research conducted in March by cyber security firm Darktrace into human behavior around email. The survey reaffirmed the need for new cyber tools to combat AI-driven hacker threats targeting businesses.
The study sought a better understanding of how employees react to potential security threats globally. It also underscored their growing knowledge of the need for better email security.
Darktrace’s global survey polled 6,711 employees in the US, UK, France, Germany, Australia and the Netherlands. Alongside it, the company observed a 135% increase in “novel social engineering attacks” across thousands of active Darktrace email customers from January to February 2023, a trend consistent with the widespread adoption of ChatGPT.
These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, while containing no links or attachments. The trend suggests that generative AI, such as ChatGPT, is giving threat actors a way to craft sophisticated and targeted attacks at speed and scale, according to the researchers.
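To make those linguistic signals concrete, here is a minimal, hypothetical sketch (not Darktrace’s actual method) of how the surface features the researchers describe, such as text volume, punctuation density, sentence length, and the absence of links, could be extracted from an email body:

```python
import re

def linguistic_features(email_text: str) -> dict:
    """Extract simple surface signals from an email body.

    These mirror the features described in the research: overall text
    volume, punctuation density, sentence length, and whether the
    message carries any links at all.
    """
    sentences = [s for s in re.split(r"[.!?]+", email_text) if s.strip()]
    words = email_text.split()
    return {
        "char_count": len(email_text),
        "word_count": len(words),
        "punctuation_count": sum(ch in ",.;:!?-" for ch in email_text),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "contains_link": bool(re.search(r"https?://|www\.", email_text, re.I)),
    }

if __name__ == "__main__":
    sample = (
        "Dear colleague, following our discussion last quarter, "
        "please review the revised figures before Friday's meeting; "
        "I would value your thoughts on the projected numbers."
    )
    print(linguistic_features(sample))
```

A message like the sample above would score high on punctuation and sentence length yet contain no link or attachment, which is exactly the profile the researchers say traditional filters tend to wave through.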
According to Max Heinemeyer, Chief Product Officer of Darktrace, one of the three most important findings from the research is that most employees are concerned about the threat of AI-generated emails.
“This is not surprising, as these emails are often indistinguishable from legitimate communications, and some of the signs employees commonly look for to spot a ‘fake,’ such as poor spelling and grammar, are exactly the flaws that chatbots are proving extremely efficient at eliminating,” he told TechNewsWorld.
Research Highlights
Darktrace asked employees at retail, catering and leisure companies how concerned they are that hackers can use generative AI to create scam emails indistinguishable from genuine communications. Eighty-two percent said they are worried.
More than half of all respondents indicated they know what makes an email look like a phishing attack. The top three signs cited were an invitation to click a link or open an attachment (68%), an unknown sender or unexpected content (61%), and poor use of spelling and grammar (61%).
This is significant and troubling, as 45% of Americans surveyed noted that they had been the victim of a fraudulent email, according to Heinemeyer.
“It is unsurprising that employees are concerned about their ability to verify the legitimacy of email communications in a world where AI chatbots are increasingly able to mimic real-world conversations and generate emails that lack all the common signs of a phishing attack, such as malicious links or attachments,” he said.
Other key results of the survey include the following:
- 70% of global employees have seen an increase in the frequency of scam emails and texts over the past six months
- 87% of global workers are concerned about the amount of personal information about themselves available online that could be used in phishing and other email scams
- 35% of respondents have tried ChatGPT or other generative AI chatbots
Human Error Guardrails
The wider reach of generative AI tools like ChatGPT and the increasing sophistication of nation-state actors mean email scams are more credible than ever, noted Heinemeyer.
Innocent human error and insider threats remain an issue. Misdirecting an email is a risk for every employee and every organization. Nearly two out of five people have sent an important email to the wrong recipient with a similar-looking surname, either by mistake or because of autocomplete. That rate rises to over half (51%) in the financial services industry and stands at 41% in the legal sector.
Regardless of fault, such human errors add another layer of security risk that is not malicious. A self-learning system can spot this kind of error before sensitive information is shared incorrectly.
In response, Darktrace unveiled a significant update to its globally deployed email solution. This helps strengthen email security tools as organizations continue to rely on email as their primary collaboration and communication tool.
“Email security tools that rely on knowledge of past threats are failing to future-proof organizations and their people against email threats,” he said.
Darktrace’s latest email capability includes behavioral detection for misdirected emails that prevents intellectual property or confidential information from being sent to the wrong recipient, according to Heinemeyer.
AI Cyber Security Initiative
By understanding what’s normal, AI security can determine what doesn’t belong in a particular person’s inbox. Email protection systems often get it wrong, with 79% of respondents saying their company’s spam/security filters wrongfully block important legitimate email from reaching their inboxes.
With a deep understanding of the organization and how the individuals within it interact with their inbox, AI can determine for each email whether it is suspicious and should be acted upon or if it is legitimate and should be left untouched.
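As a rough illustration of that per-inbox idea, the sketch below (an assumed, simplified approach, not Darktrace’s implementation) compares a feature of a new message, such as average sentence length, against the baseline observed in past mail to the same recipient and flags large deviations:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float, threshold: float = 3.0) -> bool:
    """Flag a feature value that deviates strongly from a recipient's baseline.

    `history` holds the feature (e.g., average sentence length) observed in
    past mail to this inbox; values more than `threshold` standard deviations
    away from that baseline are treated as suspicious.
    """
    if len(history) < 5:          # too little history to judge reliably
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Example: a sudden jump in sentence length relative to this inbox's norm.
past_lengths = [12.1, 11.4, 13.0, 12.7, 11.9, 12.3]
print(is_anomalous(past_lengths, 28.5))   # True: far outside the baseline
```

The point of the toy example is that the judgment is relative to what is normal for one person’s mailbox rather than to a global list of known-bad indicators, which is the distinction Heinemeyer draws between self-learning and signature-based defenses.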
“Tools that work from knowledge of historical attacks will be no match for AI-generated attacks,” Heinemeyer offered.
Analysis of these attacks shows significant linguistic deviations, both semantic and syntactic, compared to other phishing emails. That leaves little doubt that traditional email security tools, which operate from knowledge of historical threats, will fall short in picking up on the subtle indicators of these attacks, he explained.
Reinforcing this, research from Darktrace has shown that email security solutions, which include native, cloud and static AI tools, take an average of 13 days from the time a victim is attacked until the breach is detected.
“That leaves defenders vulnerable for about two weeks if they rely solely on these tools. AI defense that understands the business will be critical to detecting these attacks,” he said.
Need for AI-Human Partnership
Heinemeyer believes that the future of email security lies in a partnership between AI and humans. In this arrangement, algorithms are responsible for determining whether a communication is malicious or benign, thereby shifting the burden of responsibility away from humans.
“Training on good email security practices is important, but will not be enough to stop AI-generated threats that look like perfectly benign communications,” he warned.
One of the revolutions AI is enabling in the email space is a deeper understanding of “you.” Rather than trying to predict attacks, it builds an understanding of each employee’s behavior from their email inbox: their relationships, tone of voice, sentiment, and hundreds of other data points, he argued.
“By leveraging AI to address email security threats, we not only mitigate risk but revitalize organizational trust and contribute to business outcomes. In this scenario, humans are freed up to operate at a higher level, focusing on more strategic work,” he said.
Not an Insurmountable Cyber Security Problem
The threat of offensive AI has been researched from the defensive side for a decade. Attackers will inevitably use AI to enhance their operations and maximize ROI, noted Heinemeyer.
“But it’s not something we would consider insurmountable from a defense perspective. The irony is that generative AI may be escalating the social engineering challenge, but AI that knows you can parry it,” he predicted.
Darktrace tests offensive AI prototypes against the company’s technology to continually evaluate the efficacy of its defenses ahead of this inevitable evolution in the attack landscape. The company is confident that AI coupled with a deep understanding of the business will be the most powerful way to combat these threats as they continue to evolve.