
I recently gave a presentation at SecTor on proactive threat hunting, which sparked some meaty conversations afterward on the expo floor. Surrounded by "AI-first" security vendors, the CISOs and threat hunters I spoke with were worried. They're worried because AI can elevate script kiddies into elite hackers with advanced capabilities and legions of adversarial AI bots, and we're not prepared for that — at least, not yet.
While there's no doubt AI holds great potential for cybersecurity, in practice, it's mainly being used to automate what we're already doing. For companies to stand a chance, we need fundamentally new approaches to AI-powered defense, not merely optimized versions of existing ones.
The asymmetry problem
Attackers already have systemic advantages that AI amplifies dramatically. There are some great examples of how AI can be used for defense, but those same methods, turned against us, could be devastating. Take XBOW, an autonomous pen-testing bot created by a startup of the same name. It's a security product, and an impressive one at that: this summer, for the first time in bug bounty history, XBOW's autonomous penetration tester held the top spot on the HackerOne leaderboard for several months running.
