AI maturation is leading to more malicious hacking attacks.
Like thousands of other cybersecurity thought leaders, I’ve been speaking about AI being used maliciously since OpenAI released ChatGPT in November 2022. The entire cybersecurity industry has been warning about it nonstop. We’ve known that as AI progressed, attackers would adopt those same productivity features and turn them against us.
Until just a few months ago, when I spoke about the coming wave of AI attacks, I followed it up with, “Although AI attacks are coming, how you are likely to be compromised today will not include AI.” I changed that a few months ago. Now I say, “How you are likely to be attacked today increasingly involves AI, and by the end of 2026, most hacking attacks will be driven by AI.”
What changed my mind?
AI services have matured, and hackers have increasingly adopted those improvements into their own tools and methods. Today, most hacking tools and phishing kits incorporate AI. And that AI allows those hackers to be more pervasive, faster, and more successful.
How the Maturation of AI Over Time Has Allowed Malicious Hacking To Accelerate
AI has matured far faster than any previous industrial revolution; no earlier transformation has been as fast or as sweeping. Here are the crucial improvements in AI technology that have allowed malicious hacking to accelerate so quickly.
LLM-Created Phishing Messages
The very first large language model (LLM) chatbots, like ChatGPT, Claude, Gemini and Microsoft Copilot, quickly proved that social engineers could use them to craft far better-looking and better-sounding phishing messages. In the past, most online scammers lived in countries other than their victims’ (to avoid direct legal consequences) and didn’t speak the victim’s language as well as the victim did.
Hence, most phishing messages contained typos and grammatical errors. In fact, typos and grammar errors were among the most common warning signs of phishing taught by all cybersecurity defenders, including KnowBe4.
AI chatbots allow attackers to craft very realistic-looking phishing messages, in nearly any language, without grammar errors or typos. The messages contain more realistic country and industry jargon, and the scammer can respond far more believably to any potential victim inquiry. In the past, if a victim asked the scammer a question, the scammer, who didn’t naturally speak the victim’s language, live in their country, or work in their industry, couldn’t provide great responses. Shorter replies were safer for the attacker, but they also limited how much each response helped the scam along.
AI chatbots changed all of that. We no longer advise people to look out for typos and grammatical errors as the primary sign of phishing.
How I wish that were all we had to worry about.
Generative AI Fake Pictures
Gen AI showed that realistic and believable pictures of real or fake (i.e., synthetic) people (and other objects) could be created. To this day, when Gen AI creates a fake image from prompts I’ve typed in, I’ve never failed to be impressed (even when it comes up with impossible pictures and hallucinations).
For the hacker, the ability to create a fake picture of anyone doing something or appearing a certain way was a great start to the world of “AI deepfakes.” Now, an attacker could at least create a picture of a person, object or event, and use an LLM chatbot to create an even more realistic phishing scenario. They could create fake pictures of real people doing things they really didn’t do or create fake pictures of fake people. Either way, a picture is worth a thousand words, and if you throw that in with some fake messaging, it will fool more people.
Voice Cloning
About the same time that Gen AI allowed us to create fake pictures, AI enabled us to take short audio clips of anyone and then generate an audio feed of that person saying anything. Initially, the AIs needed at least a few minutes of someone’s audio to successfully clone their voice, but now it can be done with as little as a few seconds. I’ve successfully cloned real voices using audio clips as short as six seconds.
Now, attackers could create audio clips of real people and use them in vishing (voice-based phishing) attacks. They could pretend to be any person saying anything and call a potential victim to produce a desired outcome. Most of the time, the voice-cloning criminal would leave a voicemail or online voice message because the fake voices had to be generated ahead of time.
Video Generation
The step forward that really changed things was when AI allowed anyone to create a video, with audio, of anyone else. All you have to do is upload a person’s picture along with some audio recording of their voice, and voila! You can have a video with audio of anyone saying and doing anything.
If you want to create your own deepfakes or experiment with them and don’t have any other guidance, check out my recommended deepfake pathway: Step-by-Step to Creating Your First Realistic Deepfake Video in a Few Minutes.
This was a huge game-changer. Now, a criminal could easily create hyper-realistic audio and video of anyone saying and doing anything. This became possible in early 2024, and hackers immediately began using it to craft fake videos (often of bosses and CEOs) directing employees to do something.
Some early (i.e., 2024) examples of real-life crimes committed using fake audio and video clips are:
These early attacks were not yet widespread and were focused on very high-value potential gains. AI technology wasn’t yet at the point where every hacker and their tools could easily do it. But the biggest blocker was that hackers had to create and pre-record the audio and video clips, so they couldn’t easily respond to any questions from the potential victim being scammed.
For example, in the $25M heist linked to above, the victim was asked to get on a Zoom call, which he thought contained his co-workers and boss. He was asked to transfer $25M immediately, outside company policies, so that a big potential business opportunity would not be lost. No one in the Zoom call was real except for him.
But when he asked a question, because the hacker didn’t have a pre-recorded video clip to play in response, the hacker abruptly ended the Zoom call and then sent a message to the victim saying that he, the boss, was having Internet connectivity problems and they could no longer use Zoom. The scammer then convinced the victim that he already had all the details he needed and should simply follow the previously agreed-upon instructions, which he did.
That was back in 2024, when AI couldn’t respond to questions in real-time. That’s not a problem anymore.
Real-Time Gen AI Videos
I remember sitting with my co-workers, Perry Carpenter and James McQuiggan, in early 2025, watching a video of a brand-new Chinese website where anyone could simply click on any “persona” and become anyone else in real-time. You could click on Taylor Swift, and anything you said or did appeared as Taylor Swift. You could click on Nicolas Cage, and whatever you said and did appeared as Nicolas Cage. Well, the image appeared as that person, but the voice was still the speaker’s own (not the persona’s).
Here are good demos of the technology by Perry Carpenter:
Attackers could now respond in real-time to any question a victim asked, although they would have to mimic the persona’s voice closely enough to fool the victim. It didn’t have to be perfect, but a husky-voiced male attacker couldn’t convincingly appear as Taylor Swift, for example.
Real-Time Gen AI Videos With Simulated Voices
What happened next was that the AI could respond in real-time with the simulated video and voice of the person being faked. This was another huge game-changer. Now, the attacker could pick whatever person they wanted to be (celebrity, boss, relative, etc.) and become that person in real-time. Whatever the hacker said was presented to the potential victim as if it were coming from that person.
Again, we first saw that type of functionality on a Chinese website. Then, we saw similar functionality on a US ethical hacker’s website that he had created to showcase real-time AI. Within a month, we saw an online cloud service that could do it, quickly followed by software that could be downloaded to a powerful laptop to do the same thing. More capable software, often free, followed.
Real-Time Deepfake AI-Driven Responses
This was the biggest game-changer for hackers, and it is still improving. This is what happened a few months ago that led me to change my “don’t worry about AI” statement into a “You need to worry about AI!” declaration.
Basically, LLM chatbots have now been combined with real-time fake video and audio of people. An attacker can prompt an AI chatbot to impersonate anyone and respond in a particular way according to the hacker’s instructions. After the human prompting and initialization, the AI is essentially doing the scam on its own. It decides what to say, what to ask, and how to respond. Most of what it does is not in the prompt.
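To make that architecture concrete, here is a bare-bones Python skeleton of the loop I’m describing, for defender education only. Every function in it (transcribe_audio, persona_llm_reply, synthesize_cloned_voice, render_deepfake_frames) is a hypothetical placeholder rather than a real library or product API; the point is only the structure: once the persona prompt is set, the human attacker is out of the conversational loop entirely.

```python
# Conceptual skeleton only: every function below is a hypothetical placeholder,
# not a real API. It illustrates the structure of a real-time, LLM-driven
# impersonation call so defenders can reason about it; it is not a working tool.

PERSONA_PROMPT = (
    "You are impersonating the target's manager. Stay in character, "
    "build trust, and steer the conversation toward the stated goal."
)

def transcribe_audio(audio_chunk: bytes) -> str:
    """Speech-to-text on the victim's last utterance (placeholder)."""
    raise NotImplementedError

def persona_llm_reply(history: list[str], victim_text: str) -> str:
    """The LLM decides, on its own, what to say next based on the persona
    prompt and the conversation so far (placeholder)."""
    raise NotImplementedError

def synthesize_cloned_voice(text: str) -> bytes:
    """Text-to-speech in the impersonated person's cloned voice (placeholder)."""
    raise NotImplementedError

def render_deepfake_frames(audio: bytes) -> bytes:
    """Lip-synced video of the impersonated person, fed to a virtual camera
    (placeholder)."""
    raise NotImplementedError

def impersonation_call_loop(incoming_audio_chunks):
    """The human attacker supplies only PERSONA_PROMPT; the loop runs itself."""
    history: list[str] = [PERSONA_PROMPT]
    for chunk in incoming_audio_chunks:
        victim_text = transcribe_audio(chunk)             # hear the victim
        reply = persona_llm_reply(history, victim_text)   # decide what to say
        history += [victim_text, reply]
        audio = synthesize_cloned_voice(reply)            # speak in the fake voice
        yield render_deepfake_frames(audio)               # appear as the fake face
```

Notice that the only human input is the initial prompt. Every turn of the conversation after that is the model’s own decision, which is exactly why pre-recorded clips and abrupt hang-ups, like the one in the 2024 heist above, are no longer needed.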
Here is Perry’s blog on the subject.
In this example from a presentation to the SEC, Perry demonstrates an AI-enabled chatbot that pretends to be Taylor Swift, renamed Brenda MacKey, working in IT for Hilton. He instructed the AI to maliciously social engineer potential victims out of personal information. Watching Perry’s example is absolutely terrifying. You are seeing the future of social engineering hacking, and you know it.
AI Social Engineering Is Better Than Humans
Here’s the kicker. AI-created social engineers are better than humans at tricking people out of information. A few years ago, in contests that pitted AI against human social engineers, the humans easily won. Last year, the humans still won, but the AI came so close that the people watching the contest applauded it. This year, the AI won, and it wasn’t even close.
AI-enabled social engineering chatbots are 24.8% more likely to get personal information out of a human than a human attacker is. That’s it. The last remaining Rubicon has been crossed.
This milestone was reached at the beginning of 2025. Putting all of this AI-enabled technology into hacking tools and methodologies will take about a year. That’s why I’ve been saying that by 2026, most hacking will be AI-enabled.
Our kids and grandkids will likely not associate hacking with humans. To them, hacking will be what the agentic AI bots do, and humans are just along for the ride (and to initiate the bots).
Wait, did I say agentic AI?
Agentic AI
I did. Most software and services will become agentic AI. What does that mean? Well, it means that traditional software, which normally functions in a deterministic way (same output for the same input), is going to be re-coded as agentic AI, where an AI model decides what to do next and outputs can differ even for the same input.
The agentic portion is somewhat like how we build houses in the real world. We could all build our own house…albeit probably not as well as dedicated professionals could. And for that reason, when we build houses, we hire all the right people. We use a professional architect’s building plans, and we hire a general contractor who manages all the other people who work on the house: concrete crews, painters, construction workers, flooring specialists, plumbers, electricians, and so on. All those specialists work largely independently, each doing the specific jobs they’ve been hired to do.
In the AI world, the general contractor is called the AI orchestrator. The orchestrator is selected by a human, given a goal, and works with other cooperating agents toward that larger common goal, like building a house, hacking a company, or defending a company against an agentic AI threat.
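For readers who want to see the contractor analogy in code, here is a minimal, vendor-neutral Python sketch. The call_llm function is a hypothetical placeholder for whatever model API is being used, and the two specialist agents are stand-ins for real tools; the contrast to notice is that the deterministic tax_rate function always returns the same answer, while the orchestrator’s plan can differ from run to run.

```python
# Minimal sketch of the orchestrator/specialist-agent pattern described above.
# call_llm() is a hypothetical placeholder for a real model API; the specialist
# agents are stand-ins for whatever tools the orchestrator can delegate to.

def tax_rate(income: float) -> float:
    """Traditional deterministic code: same input, same output, every time."""
    return 0.22 if income > 50_000 else 0.12

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; its output can vary from run to run."""
    raise NotImplementedError

SPECIALIST_AGENTS = {
    "recon": lambda task: call_llm(f"Gather public information for: {task}"),
    "report": lambda task: call_llm(f"Summarize findings for: {task}"),
}

def orchestrator(goal: str) -> str:
    """The 'general contractor': given a goal by a human, it plans the work
    and delegates subtasks to specialist agents until the goal is met."""
    plan = call_llm(f"Break this goal into subtasks, one per line: {goal}")
    results = []
    for subtask in plan.splitlines():
        agent = "recon" if "gather" in subtask.lower() else "report"
        results.append(SPECIALIST_AGENTS[agent](subtask))
    return call_llm(f"Combine these results into a final answer: {results}")

# Example goal (hypothetical): orchestrator("Assess our external attack surface")
```

Swap the goal and the specialist agents, and the same skeleton describes a penetration-testing bot, a phishing bot, or a defensive monitoring bot, which is exactly why both threats and defenses are going agentic.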
Everything is going agentic AI, both threats and defenses.
Here’s a longer article on agentic AI.
And AI is not just enabling social engineering. All hacking is going to be AI-enabled.
Hackbots
AI tools and bots are also now taking over finding and exploiting vulnerabilities, both patched and unpatched. HackerOne, a bug bounty community of cybersecurity researchers who hunt for software and firmware vulnerabilities, announced in February 2025 that an AI tool called XBOW had found more vulnerabilities than the humans in that month’s contest. It was the first time that had happened.
Now, AI tools are finding so many vulnerabilities that the hacking contest tracking and reward system has been split between AI tools and traditional humans, so the humans won’t feel so bad.
And this summer, Google announced that its AI bot, Big Sleep, had found multiple zero-days. It not only found zero-days but was able to identify one that was likely to be exploited soon by malicious hackers.
We are already seeing the first instances of agentic AI being used by hackers to do bad things. On August 27, 2025, ESET announced the discovery of the first known ransomware to use AI. On the same day, Anthropic announced that AI had been used to compromise 17 companies “to automate almost the entire crime spree.” These are just the first instances of what will become a flood over the next 12 months.
That’s it! It’s good guys’ AI bots versus bad guys’ AI bots, and the best algorithms will win. This is the future of all cybersecurity from here on out.
Again, I predict this will be the status quo…normal…just the way things are…by the end of 2026. If I’m off on the timing, it won’t be by much. I’ll be more right than wrong.
I previously wrote an article on AI attacks here.
Defenses
But not all is lost. Remember, for the first time in cybersecurity, the good guys are figuring out how a new technology will be abused by the bad guys while also building agentic AI-enabled defenses to defeat malicious hacking.
I actually have hope for the first time in my 38-year cybersecurity career that the good guys will figure this out better than the bad guys. The good guys invented AI. The good guys have been spending more time and resources on AI. The good guys understand AI better than the bad guys.
Every cybersecurity company, including KnowBe4, has been developing agentic AI-enabled defenses to defeat hackers and their malware for years. KnowBe4 has been working on AI defenses for over 10 years and has focused almost exclusively on agentic AI defenses for the past three years. It’s all AI, AI, AI!
And we are seeing dividends. Customers who use our AI products (called AIDA) are seeing far better outcomes than if they relied on human admins and manual configuration alone. We know, for example, that if you let AIDA pick the simulated phishing templates used to test your co-workers, users will “fail” more simulated phishing tests, which means they get more anti-phishing education, which in turn means they are less at risk from real phishing attacks.
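To illustrate the general idea (this is a toy sketch of adaptive template selection, not KnowBe4’s actual AIDA algorithm), here is what “let the AI pick the template” looks like in miniature: a simple epsilon-greedy picker that, per user, favors the phishing lure categories that user has historically fallen for, while occasionally exploring new ones.

```python
import random
from collections import defaultdict

# Toy illustration of adaptive simulated-phishing template selection.
# This is NOT the actual AIDA algorithm, just a sketch of the idea:
# per user, favor the template categories they have failed most often.

TEMPLATE_CATEGORIES = ["invoice", "password-reset", "shipping", "hr-benefits"]

class AdaptiveTemplatePicker:
    def __init__(self, epsilon: float = 0.2):
        self.epsilon = epsilon                                # chance to explore a random category
        self.sent = defaultdict(lambda: defaultdict(int))     # user -> category -> tests sent
        self.failed = defaultdict(lambda: defaultdict(int))   # user -> category -> tests failed

    def pick(self, user: str) -> str:
        if random.random() < self.epsilon:
            return random.choice(TEMPLATE_CATEGORIES)         # keep exploring new lure types
        def fail_rate(cat: str) -> float:
            sent = self.sent[user][cat]
            return self.failed[user][cat] / sent if sent else 1.0  # untried = assume risky
        return max(TEMPLATE_CATEGORIES, key=fail_rate)        # exploit the user's weakest spot

    def record(self, user: str, category: str, clicked: bool) -> None:
        self.sent[user][category] += 1
        if clicked:
            self.failed[user][category] += 1

picker = AdaptiveTemplatePicker()
picker.record("alice", "invoice", clicked=True)
print(picker.pick("alice"))  # most likely "invoice" (exploration and ties aside)
```

Because each user keeps getting tested against their personal weak spots, failure rates, and therefore remedial training, go up in the short term, which is the whole point.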
Training the AIs
At KnowBe4, we are in the business of decreasing human risk. And now that mission includes addressing the risk of the AI agents that users use. We used to protect and train humans only; now we are also training those AI agents. We are heads down, working hard to develop more and more solutions focused on training and guiding AI agents.
Here is KnowBe4’s CEO, Bryan Palma, discussing how we will train AI agents.
It used to be human versus human.
Now it’s quickly evolving to AI versus AI, and may the best AI/human combination win. Human involvement in cybersecurity will not go away completely, but AI will take over more and more tasks that humans used to perform.
But the main takeaway of this article is that AI, and how hackers are using it, is maturing and evolving very quickly. Next year’s world will be different from all previous years. Make sure you are prepared.
