Artificial intelligence continues reshaping the cybersecurity landscape, and many security professionals now believe it is also helping create a more capable generation of cybercriminals.
We recently surveyed thousands of subscribers to the Cybersecurity Insider newsletter and asked a simple but important question: Is AI creating a new generation of skilled threat actors?
Key Takeaways from the Survey
- Nearly half of cybersecurity professionals surveyed (47.1%) believe AI is helping create a more capable generation of cybercriminals.
- Another 29.4% said AI is lowering the barrier to entry, allowing less experienced attackers to launch sophisticated campaigns.
- Threat actors are increasingly using AI for crafting phishing emails, developing and obfuscating malware, reconnaissance, and social engineering.
- Security teams are already seeing AI-generated phishing lures and AI-assisted attack techniques become more scalable and convincing.
- Despite AI’s growing role, many professionals still believe successful cybercriminal operations require human expertise, good operational security (OPSEC), and strategic planning.
The results of the survey reveal growing concern across the cybersecurity community.
Nearly half of respondents (47.1%) said yes, believing AI is actively helping threat actors improve their technical skills.
Another 29.4% said AI is primarily lowering the barrier to entry, making it easier for less experienced attackers to launch sophisticated campaigns.
Only 23.5% believed real technical expertise is still required and that AI alone is not enough to create capable adversaries.
How AI Is Changing Cybercriminal Operations
The findings reflect a broader shift happening across cybercrime ecosystems.
Historically, launching advanced attacks often required deep technical expertise in malware development, scripting, exploit chains, infrastructure management, or social engineering.
Today, generative AI tools are increasingly automating portions of those workflows.
Attackers can now use AI to generate phishing emails, create malicious scripts, improve malware obfuscation, summarize stolen data, translate attacks into multiple languages, and even troubleshoot coding errors in real time.
For many defenders, the concern is not necessarily that AI is creating elite hackers overnight, but rather that it is accelerating skill development and operational efficiency for existing cybercriminals.
AI tools can help inexperienced threat actors perform tasks that previously required mentorship, underground resources, or years of hands-on self-learning.
This shift may also contribute to the rise of more scalable phishing campaigns, credential theft operations, and business email compromise attacks.
AI Is Lowering the Barrier to Entry for Attackers
The survey results also highlight growing anxiety around accessibility.
Nearly one-third of respondents viewed AI primarily as a force that lowers the barrier to entry rather than replacing technical expertise entirely.
Ransomware affiliates, phishing operators, and initial access brokers increasingly rely on automation, stolen credentials, and social engineering rather than advanced exploits they built themselves.
Security Teams Are Already Seeing AI-Driven Threats
Security teams are already seeing evidence of this evolution.
AI-generated phishing lures are becoming more personalized and grammatically polished, making them harder for employees to detect.
Threat actors are also experimenting with AI-assisted reconnaissance, fake personas, deepfake audio, and automated vulnerability research.
While many AI-generated attack techniques still require human oversight, the technology is helping attackers move faster and operate at greater scale.
Human Expertise Still Matters in Cybercrime
At the same time, the survey responses suggest many cybersecurity professionals still believe foundational technical skills remain essential.
The 23.5% who rejected the idea that AI alone creates skilled threat actors likely recognize that successful cyber operations still depend on good operational security, persistence techniques, infrastructure management, and real-world experience.
AI may assist attackers, but it does not fully replace expertise, creativity, or strategic planning.
AI as a Force Multiplier for Modern Cybercrime
Still, the overall trend is clear: defenders increasingly view AI as a force multiplier for cybercrime.
As organizations continue integrating AI into enterprise environments, the same technology driving productivity and automation may also increase adversarial capabilities.
This creates new pressure for security teams to improve visibility, strengthen identity protections, and invest in behavioral detection rather than relying solely on traditional indicators of compromise.
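The contrast between traditional indicators of compromise and behavioral detection can be illustrated with a minimal sketch. Everything below is a hypothetical, simplified assumption for illustration only: the hash values, the field names, and the z-score threshold are invented, and real behavioral analytics platforms use far richer signals.

```python
from statistics import mean, stdev

# Hypothetical IOC list (truncated hashes) -- illustrative values only.
KNOWN_BAD_HASHES = {"e3b0c44298fc1c14"}

def ioc_match(file_hash: str) -> bool:
    """Traditional detection: flag only artifacts already known to be bad."""
    return file_hash in KNOWN_BAD_HASHES

def behavioral_flag(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Behavioral detection: flag activity far outside a user's own baseline,
    even when no known-bad indicator exists yet."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A novel AI-generated payload has no matching IOC, so signature
# matching misses it entirely...
print(ioc_match("a1b2c3d4deadbeef"))             # False: unseen hash slips past

# ...but the compromised account's download volume (e.g., files per day)
# sits far outside its own historical baseline.
print(behavioral_flag([10, 12, 11, 9, 13], 95))  # True: flagged as anomalous
```

The point of the sketch is the asymmetry: IOC matching can only catch what has been seen before, while a baseline comparison can surface novel, AI-assisted activity that produces no known signature.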
The survey results ultimately reflect a cybersecurity industry adapting to a rapidly changing threat landscape.
Whether AI is creating “skilled” threat actors or simply empowering less experienced ones, many security professionals agree the technology is reshaping how attacks are developed, scaled, and executed.
