Deepfakes are easier to create than ever and are being used to attack organizations, families and individuals.
Enter “Dr. Deepfake,” a fictional name for a very real problem: a quiet operator who symbolizes how AI and generative AI are helping cybercriminals impersonate voices, faces and full identities to deceive professionals and society.
For the second week of Cybersecurity Awareness Month, we’ll dig a little deeper into AI-powered phishing emails, scams and deepfakes. The goal: help ensure your users are well-versed in these threats, both as they go about their work life and as they explore the internet in their downtime.
Social Engineering’s New Weapon
There have been several instances of deepfakes being used against organizations, including CEO impersonation attacks on companies such as Arup, Wiz and Ferrari. According to a recent Wall Street Journal article on CEO deepfakes[1], the attack pattern has shifted significantly, with the percentage of organizations reporting deepfake attacks rising from 10% to 50%. Even though many organizations don’t publicly disclose deepfake-based attacks, it’s clear their popularity in social engineering is increasing.
We’ve already seen threat groups like Scattered Spider rely on social engineering to breach major companies. While it isn’t publicly known whether they’re using deepfakes, they’ve shown how effective impersonation and urgency are at defeating security layers. It’s only a matter of time before they layer in audio or video convincing enough to pass as the real thing.
Trust Becomes a Liability
This attack vector is changing the rules. No longer can we assume a video call is legitimate just because the face matches. No longer can we rely on voice alone to confirm identity. When AI-generated content enters the mix, trust becomes a liability.
Building Skepticism into Security
Security awareness is evolving into human risk management. With deepfakes, it’s not about paranoia; it’s about professional skepticism. Here are some tips for building deepfake awareness into your Cybersecurity Awareness Month efforts and beyond:
- Teach your teams to pause, to question and to verify
- Double-check voice messages that sound urgent, video calls that seem too perfectly timed, or requests from executives that break protocol
- Make verification through secondary channels, like a known phone number or face-to-face confirmation, a routine practice
Awareness as the First Line of Defense
Dr. Deepfake represents the shift we’re facing. It’s no longer about whether someone clicked a phishing link. It’s about how easily someone can be tricked into believing what they see and hear. We can’t train people to stop believing their eyes and ears, but we can train them to confirm before they act.
Technology will eventually catch up to detect deepfakes, but for now, people remain our best defense. Awareness isn’t optional; it’s essential. And in this new era, verifying before trusting might be the skill that stops the next significant breach.