
According to Rob Lee, chief AI officer at the SANS Institute, the problem of AI misuse can't be solved by any one company on its own, not even the mighty OpenAI. “Companies are pushing models that can autonomously discover or weaponize vulnerabilities, but the global safety ecosystem — governments, frontier labs, researchers, and standards bodies — is fragmented and uncoordinated,” said Lee.
“The result is a widening gap where speed becomes its own vulnerability, creating conditions for cascading failures across infrastructure, finance, healthcare, and critical systems.”
Not all experts are this pessimistic. Allan Liska, threat intelligence analyst at Recorded Future, argues it is important not to exaggerate the threat posed by AI. “While we have reported an uptick in interest and capabilities of both nation-state and cybercriminal threat actors when it comes to AI usage, these threats do not exceed the ability of organizations following best security practices,” said Liska.
