Did Emma Watson, the actor who portrayed Hermione in the Harry Potter movies, record audio clips of herself reading the controversial book Mein Kampf? No, not unless you believe everything you hear.
Unfortunately, we no longer have that luxury, not since the arrival of deepfake audio. This artificial intelligence technology won't just rattle your impression of the talented Watson. It's also a potential security risk.
If you use voice authentication (such as for bank services), frequently direct staff with voice notes, or are concerned that a rogue recording can damage your reputation, career or company, you should tune your ears into this issue.
What is deepfake audio?
A deepfake is a high-quality impersonation of someone, a forgery convincing enough to dupe others into thinking it is real. Software can generate convincing forgeries of images and video, but audio deepfakes may represent the biggest threat to business security.
A study recently published in the journal PLOS ONE found that deepfaked voices fooled human listeners 25 percent of the time. If that seems low, consider that the technology is still maturing and will improve, and that a one-in-four hit rate is more than enough for cybercriminals.
Deepfake technology is also very accessible—the Emma Watson fakes were created using a publicly available tool.
Are you at risk?
Should you be concerned? It depends.
Deepfaked voices belong to the category of highly targeted attacks. In other words, there is not much risk (yet) of criminals using such fakes in scattershot attacks. Even though deepfake tools are available, developing a sufficiently convincing fake voice still takes time and resources. But it is not impossible, and a motivated criminal can use these tools for a range of identity-impersonation schemes.
The FBI warned about this threat in 2021, saying online criminals "almost certainly will leverage synthetic content for cybercrime and foreign influence operations in the next 12-18 months." The year before, thieves stole $35 million from a Hong Kong bank using deepfaked voice calls.
What risks do deepfake voice attacks pose?
Access fraud: deepfake voices can fool voice authentication systems, and already have.
Impersonation: criminals use deepfaked voice notes and phone calls to instruct unsuspecting employees.
Business identity compromise (BIC): deepfake voices can be part of a phishing strategy to steal credentials, multi-factor tokens, and session tokens.
Blackmail: using fake voices, criminals can pressure executives, often compelling their involvement in a larger cyberattack.
Misinformation: a well-timed fake voice note can damage company reputations, stock prices, and negotiations.
Fighting back
What are your options for reducing these risks? Start by determining where your organisation is vulnerable to deepfake voice attacks:
How frequently do personalised attacks, such as spear-phishing, target your people (or people in your sector)?
Do any of your authentication protocols rely on voice?
Do your executives use voice notes?
Do they operate in a multi-region structure where many decisions are made over phone calls?
Does any part of your security environment rely on voice authentication?
Close those gaps with training, security process improvements, and incident response planning.
Deepfake technology is still relatively new and evolving fast, and the same goes for deepfake detection tools. Managed detection partners are your best bet for accessing detection capabilities sooner and affordably: they are motivated to invest in such capabilities and can use multi-tenant scale to lower costs.
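To make the idea of "detection capabilities" a little more concrete, here is a minimal, purely illustrative sketch of the kind of signal analysis such tools build on. It assumes the Python packages librosa and numpy, uses a hypothetical file name, and is not a working detector: real products feed features like these into classifiers trained on large collections of genuine and synthetic speech.

```python
# Illustrative only: extracts the kinds of spectral features (MFCCs, spectral
# flatness) that trained deepfake-audio classifiers typically consume.
# Assumes librosa and numpy are installed; the file path is hypothetical.

import numpy as np
import librosa


def extract_voice_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load an audio clip and summarise spectral features as a fixed-length vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)

    # Mel-frequency cepstral coefficients: a compact description of the vocal spectrum.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

    # Spectral flatness: synthetic speech can show subtly different spectral
    # texture from a live recording.
    flatness = librosa.feature.spectral_flatness(y=y)

    # Summarise each feature track by its mean and standard deviation over time.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        flatness.mean(axis=1), flatness.std(axis=1),
    ])


if __name__ == "__main__":
    # Hypothetical file name; in practice this vector would be passed to a
    # classifier trained to separate genuine recordings from synthetic ones.
    features = extract_voice_features("suspect_voice_note.wav")
    print(features.shape)  # 20 + 20 MFCC statistics plus 2 flatness statistics
```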
But there is no reason to panic. Deepfake voices might be a potent tool for cybercrime, but such attacks are still rare and require considerable effort and resources. Most cyberattacks are opportunistic and scattershot. Your organisation is still far more likely to be attacked through email phishing, a lack of basic security precautions, or business email compromise.
So, there is no need to run for the hills. Deepfake voices are still a rare part of cyberattacks. Yet this is also an opportunity to get ahead of the threat. Are you or your organisation vulnerable to deepfake voice attacks? It's a question worth asking and answering.