
Attacks against AI systems and infrastructure are beginning to take shape in real-world incidents, and security experts expect their number to rise in the coming years. In the rush to realize the benefits of AI, most organizations have played fast and loose with security hardening when rolling out AI tools and use cases. As a result, experts also warn that many organizations aren’t prepared to detect, deflect, or respond to such attacks.
“Most are aware of the possibility of such attacks, but I don’t think a lot of people are fully aware of how to properly mitigate the risk,” says John Licato, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing at the University of South Florida, founder and director of the Advancing Machine and Human Reasoning Lab, and owner of startup company Actualization.AI.
Top threats to AI systems
Multiple attack types against AI systems are emerging, each targeting a different stage of the model lifecycle. Some, such as data poisoning, occur during training; others, such as adversarial inputs, strike at inference time; still others, such as model theft, target deployed models.
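To make the training-time threat concrete, here is a minimal, self-contained sketch of data poisoning. It is an illustrative toy, not any real attack: a simple nearest-centroid classifier is trained on two synthetic clusters, then an attacker injects mislabeled points far from the real data, dragging one class centroid away and collapsing accuracy. All names and parameters here are invented for the example.

```python
import random

random.seed(0)

def make_data(n):
    """Two 1-D Gaussian clusters: class 0 near 0.0, class 1 near 5.0."""
    data = []
    for _ in range(n):
        data.append((random.gauss(0.0, 1.0), 0))
        data.append((random.gauss(5.0, 1.0), 1))
    return data

def train_centroids(data):
    """'Train' a nearest-centroid model: the mean of each class's points."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def predict(centroids, x):
    # Assign x to the class whose centroid is nearest.
    return min(centroids, key=lambda c: abs(x - centroids[c]))

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

train = make_data(200)
test = make_data(100)

clean_model = train_centroids(train)
acc_clean = accuracy(clean_model, test)

# Poisoning: the attacker slips 150 mislabeled points (x = -10, labeled
# class 1) into the training set, dragging the class-1 centroid far from
# where class-1 data actually lives.
poison = [(-10.0, 1)] * 150
poisoned_model = train_centroids(train + poison)
acc_poisoned = accuracy(poisoned_model, test)

print(f"clean accuracy:    {acc_clean:.2f}")   # near-perfect separation
print(f"poisoned accuracy: {acc_poisoned:.2f}")  # far worse on the same test set
```

The same test set is used for both models, so the accuracy gap isolates the effect of the corrupted training data, which is exactly why poisoning is hard to spot at inference time: the deployed model looks intact, but its learned parameters were skewed before deployment.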
