Business email compromises, which supplanted ransomware last year as the top financially motivated attack vector threatening organizations, are likely to become harder to track. New research from Abnormal Security suggests attackers are using generative AI to create phishing emails, including vendor impersonation attacks of the kind Abnormal flagged earlier this year from the actor dubbed Firebrick Ostrich.
According to Abnormal, by using ChatGPT and other large language models, attackers can craft social engineering missives that aren’t festooned with the usual red flags: formatting issues, atypical syntax, incorrect grammar, poor punctuation, misspellings, and suspicious email addresses.
The firm used its own AI models to determine that certain emails sent to its customers, later identified as phishing attacks, were probably AI-generated, according to Dan Shiebler, head of machine learning at Abnormal. “While we are still doing a complete analysis to understand the extent of AI-generated email attacks, Abnormal has seen a definite increase in the number of attacks with AI indicators as a percentage of all attacks, particularly over the past few weeks,” he said.