Google Threat Intelligence Group recently released its latest report, “GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Us,” on how malicious adversaries are using AI to commit cybercrimes.
Google’s size, central axis, threat intelligence, and assessment make their report among the best out there. You can rely on what they are seeing and concluding as a very good proxy for state-of-the-art AI-enabled cybercrime, especially at scale and committed by nation-states. There may be a few pockets of smaller groups of cyber criminals or individuals using AI in other, more advanced ways, but Google is telling you what is happening broadly in the real world. It is what most of us have to worry about. How nation-states are using AI maliciously is a canary in the coal mine for the rest of us.
Google is seeing increasing use and sophistication of AI by our adversaries over time, including in social engineering.
Selected takeaways: Findings Related to Social Engineering
- (LLMs) have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures
- AI is used to create hyper-personalized phishing messages
- Attackers are using AI to create “rapport building phishing
- AI is used to research potential targets, including using AI to do OSINT research
- AI was used to target and research specific individuals
Other findings:
- AI-generated malware is becoming more common
- AI is used to write malicious code and scripts
- AI is used to research known vulnerabilities
- Intellectual property theft appears to be a big motivator, and is usually accomplished against private users of AI (versus against large-scale AI)
- Adversaries are creating or using services that “jailbreak” legitimate AI, MCP components, and other legitimate AI APIs, so that they can be used for malicious purposes
- Google detected attacks against their public AI where the attackers were trying to better understand Google AI’s logic and reasoning (Google calls this ‘model extraction’ and ‘distillation attacks’)
Possibly the only good note was that Google has not seen an attack that fundamentally changes the threatscape (i.e., AI is being used to do traditional attacks). AI is just being used to do it more pervasively, more personalized, and with fewer mistakes.
According to a recent Chainalysis report, AI-enabled cybercrimes were able to steal 4.5X more value. Let that sink in!
That AI is being used to do more hyper-personalized social engineering with more accuracy and fewer mistakes is no surprise. We know that phishing lures that specifically target particular groups or people are far more likely to be successful. In fact, spearphishing emails are involved in the vast majority of successful attacks and s AI-enabled spearphishing messages will only increase that success rate.
Make sure to update your security awareness training with these lessons. Users can no longer rely on generic, generalized phishing messages as an indicator of a scam. Today’s AI-enabled scams are going to know your organization, know the potential victim, and try to build a rapport with them.
What Google Didn’t Report
What Google did not report is as important as what it did see. Google is not yet talking about autonomous AI agents and fully automated hackbots running around hacking people and organizations. All of these I predicted in my most recent book, How AI and Quantum Impact Cyber Threats and Defenses , as becoming the norm in 2026. But at least at the beginning of 2026, Google is not seeing it. Although I have a feeling that 2026 will seem like multiple years before we get to the New Year.
If autonomous hacking bots do not explode in 2026, as I predict, they will be here sooner rather than later. One day, Google’s report will start to mention AI-enabled autonomous hacking bots more and more, until they just become the majority of what Google is reporting on.
Ultimately, what it means is that you need to prepare, both with your education and your defensive tools. You will likely need AI-enabled cybersecurity tools to defeat AI-enabled attacks. Why? Because AI-enabled things are going to be more successful at what they do than traditionally coded programs.
The world of AI versus AI is not coming. It is here.
The cybersecurity industry is not sitting back waiting for the bad guys to use AI without an offsetting AI-enabled defense.
KnowBe4 has had AI agents for the last 10 years and we are developing dozens of other agents to help protect you and the AI agents you use. We just released our best-in-the-industry AI-enabled deepfake training content creator. It allows anyone to upload a video of their boss or co-worker and very quickly create a deepfake video that can then be sent out to other co-workers as part of a simulated phishing campaign. You can find out how many co-workers would have been fooled by the deepfake content and easily give them additional training.
KnowBe4 Trains Humans and Agents
KnowBe4 has seven AI agents in market already, with dozens more on the way. We have always been the leader for training humans, and now we will train your AI agents. In order to keep the humans safe, we are going to need to keep the agents you use safe.
This layered approach provides resilience that no other platform currently matches and includes:
- Agent-Safe Behavior Training: Just as employees learned to spot a malicious link, they must now learn how to safely interact with and oversee AI agents.
- Prompt Injection & Manipulation Defense: Simulated attacks train global workforces to identify and resist adversarial inputs designed to hijack enterprise AI agents.
- Risk Scoring for Agent Interactions: Extending the industry-leading Risk Score to measure susceptibility to agent misuse provides comprehensive risk quantification.
As cyber threats increasingly become more AI-enabled, so too will your defenses. KnowBe4 has always been on the cutting edge of technology, and has the AI agents you need to keep your workforce safe.
