Dive Brief:
- The AI era is transforming what CISOs do and how they do it, the enterprise software firm Splunk said in a report published on Tuesday.
- Nearly all CISOs have been assigned to manage their organizations’ AI governance responsibilities, the report found, a significant expansion of “their already overwhelming mandates.”
- CISOs interviewed in the report expressed both an awareness that they needed to use AI and a range of concerns about its potential harms.
Dive Insight:
CISOs are feeling increasing pressure to integrate AI into their workflows, Splunk found, particularly as threat actors use the technology more often and in more potent ways.
“If your security function isn’t using AI, it’s like taking a knife to a gun fight,” Mike Salem, CISO of communications infrastructure operator IHS Towers, said in the report. “For CISOs, that can be a tough pill to swallow.”
Despite CISOs’ drive to adopt AI — more than two-thirds of them said investing in AI-driven cybersecurity capabilities was a very important or the most important priority — many of those who have already presided over significant AI deployments report only mixed results.
For example, just 39% of the CISOs who have partially or fully adopted agentic AI “strongly agree it has increased their team’s reporting speed,” Splunk said in its report. Nearly two-thirds of CISOs overall disagreed with the statement “agentic AI will replace some level 1 security team functions.”
Still, Splunk found that CISOs were bullish on agentic AI’s potential in several key areas of their portfolio, including speeding up data analysis (82% somewhat or strongly agreed that it would help in this area), mitigating workforce gaps (63%) and making human analysts more accurate (62%).
But agentic capabilities also introduce acute risks: 83% of CISOs cited the impact of AI model hallucinations as their top concern. A lack of human oversight ranked second, followed by potential legal liability for agents’ actions.
Splunk’s report is based on a survey of 650 CISOs taken in July and August 2025. Respondents represented nine industries, including manufacturing, telecommunications, energy and transportation, in nine countries, including the U.S., France, Germany and Singapore.
Beyond agentic AI specifically, data leaks and shadow AI represent major concerns for CISOs. More than three-quarters of CISOs said data leaks were their top concern. And CISOs already using generative AI were more worried about shadow AI (90% cited it as a top-three concern) than those who aren’t already using it (79%).
“Shadow AI presents a direct challenge to governance, control, and the integrity of security operations — in other words, everything the CISO strives to protect,” Splunk said in its report, adding, “CISOs who are further along in their generative AI journey are starting to notice some trade-offs.”
At a high level, CISOs are primarily concerned about threat actors’ increasing sophistication (95% identified this as a major challenge), the pace of technological evolution (89%), changes in the regulatory environment (76%), workforce shortages (47%) and budgets (42%).
Few security executives believe that AI will solve their workforce problems. Only 1% of respondents said they considered new technology their best solution to skills gaps. Instead, CISOs are prioritizing upskilling and hiring. Many, however, are pessimistic about filling all of their open roles — only 16% said they expected to do so, while 79% expected many to remain vacant.
Splunk offered five recommendations for CISOs in the AI era: translate security work into language the rest of the business can understand; refocus on quality of work, not quantity, to address burnout; collaborate with other business leaders to “embed security into business strategy”; pair human intuition with machine automation; and establish clear AI governance for the technology they will inevitably adopt.
