
Compliance and governance
The Wiz findings highlight how exposed API keys can escalate into full-scale compromises across AI ecosystems, according to Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services. “Stolen credentials can be used to manipulate model behavior or extract training data, undermining trust in deployed systems.”
Grover noted that such exposures are often linked to the way AI development environments operate. “AI projects often operate in loosely governed, experimentation-driven environments where notebooks, pre-trained models, and repositories are shared frequently, leaving secrets unscanned or unrotated,” she added.
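The baseline control Grover describes as missing is automated secret scanning of shared artifacts. A minimal illustrative sketch of the idea follows; the regex rules cover only a few publicly documented key prefixes (OpenAI-style `sk-`, AWS `AKIA`, Hugging Face `hf_`), whereas production scanners such as gitleaks or TruffleHog ship far larger rule sets:

```python
import re

# Illustrative patterns only: each matches a publicly documented
# credential prefix. Real scanners use hundreds of rules plus entropy checks.
SECRET_PATTERNS = {
    "OpenAI-style API key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for suspected secrets in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

if __name__ == "__main__":
    # Synthetic, non-functional key used purely for demonstration.
    sample = 'api_key = "AKIA' + 'ABCDEFGHIJKLMNOP"'
    for rule, value in scan_text(sample):
        print(f"{rule}: {value[:8]}... (redacted)")
```

Running such a check in CI on every notebook and repository push, paired with scheduled key rotation, addresses the “unscanned or unrotated” gap the quote points to.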
She pointed to data from IDC’s Asia/Pacific Security Study, which found that 50% of APAC enterprises plan to invest in API security when selecting cloud-native application protection platform (CNAPP) vendors, reflecting how exposed APIs have become a major attack vector.
With regulators sharpening their focus on AI safety and data protection, secret management and API governance are likely to become auditable elements of emerging AI compliance frameworks, Grover said.
