
Grover said organizations should assume prompt injection attacks will occur and focus on limiting the potential blast radius rather than trying to eliminate the risk altogether. She said this requires enforcing least privilege for AI systems, tightly scoping tool permissions, restricting default data access, and validating every AI-initiated action against business rules and sensitivity policies.
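In practice, that kind of gate can be expressed as a policy check that runs before any AI-initiated tool call executes. The following minimal Python sketch illustrates the idea; the tool names, scopes, and sensitivity labels are assumptions made for the example, not drawn from any specific product.

```python
# Illustrative sketch: deny-by-default gate for AI-initiated tool calls.
# Each tool is granted only the narrow scopes it needs (least privilege),
# and calls touching sensitive data are refused unless policy allows them.

TOOL_SCOPES = {
    "search_docs": {"read:public"},     # hypothetical tool -> required scopes
    "send_email":  {"send:internal"},
}

SENSITIVE_LABELS = {"confidential", "restricted"}

def authorize_tool_call(tool: str, session_scopes: set, doc_labels: set) -> bool:
    """Allow the call only if the session holds every scope the tool requires
    and none of the data involved carries a restricted sensitivity label."""
    required = TOOL_SCOPES.get(tool)
    if required is None:                        # unknown tool: deny by default
        return False
    if not required.issubset(session_scopes):   # missing scope: deny
        return False
    if doc_labels & SENSITIVE_LABELS:           # sensitive data: deny here, escalate elsewhere
        return False
    return True

# A session scoped to public reads cannot send email, even if a
# prompt-injected instruction asks the model to do so.
print(authorize_tool_call("send_email", {"read:public"}, set()))   # False
print(authorize_tool_call("search_docs", {"read:public"}, set()))  # True
```

The point of the sketch is that the decision rests on the session's granted scopes and the data's labels, not on anything the model says about its own intentions.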
“The goal is not to make the model immune to language, because no model is, but to ensure that even if it is manipulated, it cannot quietly access more data than it should or exfiltrate information through secondary channels,” Grover added.
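One common secondary channel is a link or image URL in the model's output that carries data to an attacker-controlled host. A simple egress filter can close that path; the sketch below is illustrative, and the allowlisted host is an assumption for the example.

```python
import re
from urllib.parse import urlparse

# Illustrative egress check: redact any URL in model output whose host is not
# on an approved allowlist before the output is rendered or forwarded.

ALLOWED_HOSTS = {"intranet.example.com"}   # hypothetical allowlist
URL_RE = re.compile(r"https?://[^\s)\"'>]+", re.IGNORECASE)

def strip_untrusted_urls(model_output: str) -> str:
    """Replace URLs pointing at non-allowlisted hosts with a placeholder."""
    def _redact(match: re.Match) -> str:
        url = match.group(0)
        host = (urlparse(url).hostname or "").lower()
        return url if host in ALLOWED_HOSTS else "[blocked-url]"
    return URL_RE.sub(_redact, model_output)

print(strip_untrusted_urls(
    "See https://intranet.example.com/doc and https://evil.example/x?d=secret"
))
# -> "See https://intranet.example.com/doc and [blocked-url]"
```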
Varkey said security leaders should also rethink how they position AI copilots within their environments, warning against treating them like simple search tools. “Apply Zero Trust principles with strong guardrails: limit data access to least privilege, ensure untrusted content can’t become trusted instruction, and require approvals for high-risk actions such as sharing, sending, or writing back into business systems,” he added.
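A sketch of that last guardrail, assuming the copilot framework routes actions through a single dispatch hook, might look like the following; the action names and approval mechanism are hypothetical and shown only to make the pattern concrete.

```python
from dataclasses import dataclass

# Illustrative approval gate: actions that share, send, or write back into
# business systems are held for human review instead of executing automatically.

HIGH_RISK_ACTIONS = {"share_file", "send_email", "write_record"}

@dataclass
class PendingAction:
    action: str
    payload: dict

def execute(action: str, payload: dict) -> str:
    """Stand-in for the real executor."""
    return f"executed {action}"

def dispatch(action: str, payload: dict, approved_by_human: bool = False):
    """Run low-risk actions directly; queue high-risk ones for approval."""
    if action in HIGH_RISK_ACTIONS and not approved_by_human:
        return PendingAction(action, payload)   # surfaced to a reviewer, not run
    return execute(action, payload)

# A request to email a document pauses for review; a read-only
# summarization request does not.
print(dispatch("send_email", {"to": "partner@example.com"}))  # PendingAction(...)
print(dispatch("summarize", {"doc": "q3-report"}))            # "executed summarize"
```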
