Long-time followers of mine know that I am not an AI hype person. Some people might even call me an AI critic. I prefer to call myself an AI realist. I do not think AI will kill us all (despite our best efforts to bypass all guardrails and common sense). I do not think AI will replace all jobs. I do not think AI will replace all cybersecurity jobs.
But I do think AI allows improvements in many areas, including cyber defenses, over traditional tools and techniques. And I do think that we will need plenty of AI-enabled cyber defenses to defeat AI-enabled threats.
Article Summary
- AI does many things more efficiently or better than humans and/or traditional cyber defense tools
- There are many traits that make AI better and more efficient at particular types of tasks
- Those traits will make AI-enabled cyber attacks more successful
- You will need those same AI traits to fight AI-enabled attacks
2026 is the year hacking becomes a mostly AI-enabled endeavor for attackers and their malware programs. Attacks and scams using AI are already more successful and steal more value per attack or scam. Chainalysis reported that scams using AI stole 4.5 times more value than scams that did not. That one fact alone means most hackers and scammers will end up using AI. Most phishing toolkits already use AI. Most hacking is already on its way to being AI-enabled this year. The future of hacking is set, and it is AI-enabled.
We do not even have to wait for the future. It is here already. Google and other major vendors are reporting increased AI use in hacking. Here is Google’s most recent blog on AI-enabled attacks. It is very difficult to read that report and not see where all hacking is heading.
When your kids hear the word ‘hacker’, they will not think of a human in a hoodie hunched over a laptop drinking Jolt cola. They will think of AI, because that is what it will mostly be.
And…for sure…you will need AI to defeat AI most effectively and efficiently.
In this post, I will not be covering the attacks AI can be involved in, but I do cover them in my latest book, How AI and Quantum Impact Cyber Threats and Defenses.
I will say that there are dozens and dozens of attack types: dozens of attacks accomplished using AI, and dozens of attacks against the AI tools you use. There is a difference between the two categories, and you have to be prepared to defend against both.
So, why will we need AI-enabled cybersecurity defense tools to fight AI attacks?
Because AI does a lot of things better than humans and/or traditional software tools.
This is what AI does better:
- Increased efficiency and automation
- Faster solutions
- New features
- New and better insights
- Increased personalization
- Better predictions
First and foremost, AI is increasing efficiency and automation. It is taking tasks that used to require more human effort and automating all or part of them. For example, finding and patching cybersecurity vulnerabilities can take up a lot of a defender’s time. With AI, you can send an AI agent off to find vulnerabilities in your environment, even previously undiscovered and undisclosed vulnerabilities (i.e., zero-days), and it will not only find them, but fix them. Some organizations are already doing this with AI, and it will become the default model for all organizations and people in the near future.
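As an illustration only, here is a minimal Python sketch of what such a find-and-fix loop looks like conceptually. The `scan`, `propose_patch`, and `find_and_fix` functions are hypothetical stand-ins for an AI agent's vulnerability discovery and patch generation; this is not any real product's API, and a real scanner would do far more than a string match.

```python
# Hypothetical sketch of an automated find-and-fix loop.
# scan() and propose_patch() are illustrative stand-ins for an
# AI agent's vulnerability discovery and patch generation.

def scan(codebase):
    """Stand-in vulnerability scanner: flags files containing a
    dangerous pattern (here, a naive check for eval())."""
    return [path for path, src in codebase.items() if "eval(" in src]

def propose_patch(src):
    """Stand-in AI-generated fix: swaps the dangerous call for a
    (hypothetical) safe alternative."""
    return src.replace("eval(", "safe_parse(")

def find_and_fix(codebase):
    """Loop until the scanner reports no remaining findings."""
    while (findings := scan(codebase)):
        for path in findings:
            codebase[path] = propose_patch(codebase[path])
    return codebase

repo = {"app.py": "value = eval(user_input)"}
fixed = find_and_fix(repo)
print(fixed["app.py"])
```

The key point of the sketch is the loop: the agent keeps scanning and patching until nothing is left to find, with no human in the middle of each iteration.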
AI allows for faster solutions in many scenarios. A great example is protein folding. All life is built from proteins, which are often very complex chains of amino acids. Both the things that help life thrive and the things that threaten it are largely proteins. Many of the vaccines and medicines that defeat harmful things work because they are complementary-shaped proteins that “match” the harmful protein’s structure, the way puzzle pieces fit together.
One of the Holy Grails of modern medicine is figuring out how all these proteins “fold.” Protein chains fold right, left, up, down, and back on themselves, making complicated shapes, like a toddler snapping together Lego™ blocks. For decades, researchers and scientists tried to figure out how all proteins fold, because solving protein folding would be one great step toward mitigating the harmful proteins. But the process was very slow. After decades of trying, we still had many decades or even centuries to go before figuring them all out…before AI.
Then, in 2020, the AI company DeepMind created an AI-enabled product called AlphaFold, which figured out most protein folding shapes very quickly. A problem that was expected to take decades to centuries longer to solve was suddenly solved in a very short period of time. Figuring out how proteins fold does not immediately cure all diseases and sicknesses, but it is a necessary first step on the way to that goal, and now it is done. AI cannot do everything remarkably faster than a traditional computer, but it can solve some types of problems, especially those involving pattern-matching, a lot faster.
AI is giving us new products and features, and significant enhancements of existing ones. It is hard to think of a product or feature that cannot be enhanced by AI. Again, though, AI cannot make every product or scenario faster, as discussed above. If you want to do pure computation, a traditional computer is better at math. If you want to factor the very large numbers used in cryptography, a quantum computer will be better (covered in the chapters on quantum). But in many scenarios, AI is giving us new features and products and better versions of existing ones. Much of what we do on a daily basis is already being improved by AI.
AI is going to allow both defenders and attackers to find more serious vulnerabilities faster. Defenders will need to use AI to find and close those vulnerabilities before the attackers do. AI will also be used to conduct better security awareness training: training that is more targeted and more successful.
AI can more easily learn from its mistakes, morph, try something new, and see if the new thing is better than the old thing. AI-enabled malware, like PROMPTFLUX, is starting to evolve itself, on-the-fly, to better escape detection and increase its chances of success.
It is expected that AI-enabled threats will soon routinely use A/B testing. A/B testing is traditionally done by marketing teams that have something to sell but are not sure which of several possible marketing messages will sell the product most successfully. Marketing will put the competing messages in front of different, smaller “test” groups of potential customers to see which message results in more sales and interaction. The winning message then becomes the one the campaign uses, at least until it starts to become less successful. Then marketing may switch to another message, or create new messages and A/B test those.
We are already seeing the rudimentary beginnings of AI-malware preparing to use A/B testing to see which messages and traits end up producing more success. If a particular scam message tricks more people, or particular malware traits bypass more malware detectors, the AI involved in the malware’s creation will switch to the more successful variant. And it does this without the input of the hacker. The AI will do it on its own. Fully autonomous A/B testing is not happening yet, but we can see the code morphing in that direction pretty quickly. It is just a matter of time.
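To make the mechanics concrete, here is a minimal Python sketch of the A/B testing idea: two message variants are each simulated against a small test group, and whichever draws more responses “wins.” The variant names and response rates are made up purely for illustration; nothing here reflects real attack data.

```python
import random

def run_test_group(true_rate, group_size, rng):
    """Simulate sending one message variant to a test group and
    count how many recipients respond."""
    return sum(rng.random() < true_rate for _ in range(group_size))

def ab_test(variants, group_size=1000, seed=42):
    """Send each variant to its own test group; return the winning
    variant name and the per-variant response counts."""
    rng = random.Random(seed)
    results = {name: run_test_group(rate, group_size, rng)
               for name, rate in variants.items()}
    winner = max(results, key=results.get)
    return winner, results

# Made-up response rates for two hypothetical message variants
variants = {"message_a": 0.03, "message_b": 0.07}
winner, results = ab_test(variants)
print(winner)
```

The mechanism is the same whether the “responses” being optimized are product sales or successful phishes, which is exactly why defenders should assume attackers will use it.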
AI is giving us increased personalization, something we are calling hyper-personalization. Computers and our programs have always tracked us and often tried to give us solutions and answers based on our tracked behaviors. I think we have all searched for something on the Internet (say, a new grill), and for weeks thereafter, nearly every web page shows us incessant ads for grills.
AI is going to allow not only more focused ads, but ads and services that better “know” you and your personal preferences. When you are buying that grill, AI will have a better chance of knowing whether you are the type of person who will spend over $1,000 on it, what type of grill you prefer, and whether you are fine with assembling it yourself or prefer it pre-assembled. If you are traveling, AI will better know the types of flights you prefer (e.g., early morning or late afternoon, window or aisle, airline, etc.), the types of hotels you prefer (e.g., Marriotts, or ones near big gyms), and whether you prefer to fly home on the last day of business or travel home the next day.
AI will allow attackers to create hyper-personalized social engineering messages crafted just for us. The AI, when trying to break into an organization, will search on all the available information about that organization and its employees. It will find out what each employee does at the organization, what activities they are involved in, where they live, what their hobbies are, and so on. And then it will send a hyper-personalized phishing message that will be far more likely to be successful.
AI will give us better insights. Better pattern-matching means AI can identify things that we cannot see as easily. There is a good chance that AI will discover new causes of cancer and other diseases, things that were “hiding” in plain sight before. There are already many anecdotal stories of AI finding things that dozens of doctors failed to see, and examples will only increase over time as AI gets better and is used more.
Lastly, AI will enable us to make better predictions. Anything that relies on historic data pattern-matching, like weather prediction, stock market conditions, or self-driving cars, will be better with AI. AI can see patterns we cannot see and put them together to produce better, faster results, just like it did for protein folding.
That better pattern-matching and prediction will help attackers be better attackers and defenders be better defenders. AI cybersecurity defenses will see patterns that humans simply cannot see as easily, allowing them to spot and mitigate threats.
It is important to recognize the inherent abilities of AI and how it works to allow for all these benefits. But AI will give benefits to attackers, too. And you will need AI-enabled cyber defense tools to defend against these AI-enabled attacks. If you try to use non-AI cyber defense tools alone to defend against AI-enabled attacks, you will lose.
Why?
Because of the inherent benefits of AI.
Again, I am not trying to be an AI hype person. I am a big believer that people should care about features more than whether something simply contains or uses AI. When a vendor tells me they use AI, I say, “So what?” I want to learn what the AI-enabled feature or product can do that it could not do before. I do not care if that new feature or product comes to me using AI or using older, traditional IF-THEN programming methods. But what I am saying is that the inherent ways AI works make it easier for developers to create new features and products that do many more amazing things than old, traditional programming could.
I like to take a moderate approach. I know AI cannot just magically do everything better. There are things being done by AI today that could have been done by traditional programming methods. There are things that AI cannot do well at all, such as guessing at truly random passwords. But if you meet people who think that AI cannot do anything better than traditional computing, they are wrong as well. The inherent way that AI works makes it better for many things.
And AI is going to allow hackers and their malware creations to be better at harming people and organizations. It already is. And you are absolutely going to need to incorporate AI-enabled cyber defense tools to respond to AI-enabled threats. Because the AI-enabled attacks are going to be faster, more pervasive, and morphing faster than traditional defenses can counter.
Do you really need AI to defeat AI?
No, but to defeat it best and most efficiently? Yes.
KnowBe4 has been using machine learning and AI for over 10 years. And our data shows that our AI-enabled tools and defenses protect our customers better and more efficiently, and reduce cybersecurity risk more, than our traditional tools do. This is not hype or salesmanship. This is what we are seeing in the data.
And for that reason, we are developing new AI agents and capabilities as fast as our AI-enabled programmers can code them. Watch this space over the next few months and years, and you will see a slew of new AI agents and tools to better protect humans and the AI agents they use. KnowBe4 is changing quickly to meet the threat of today’s AI-enabled attackers. We already have. We will be doing it more.
