Roughly two decades ago, organizational leaders began asking many questions about a watershed technology migration in the making:
- Should we move our data to the cloud?
- How much should we commit to the cloud?
- Could our employees use the cloud without IT’s approval?
From Cloud Migration to AI Migration
Today, another massive migration is underway, this time focused on generative artificial intelligence (genAI) and other AI tools and platforms.
IT and security departments already fought the good fight to safely migrate data to the cloud.
Now, they have to do it all over again for AI, with similar questions looming large:
- Should we move our data into this AI application?
- How much should we use AI?
- How often are our employees using AI without approval?
What’s more, this may amount to a far trickier transformation.
The Rise of Shadow AI and Data Visibility Gaps
Companies are seeking to go beyond popular AI apps by creating their own apps in-house, and they're prioritizing the rapid development and deployment of those apps.
But as they ramp up this development — profoundly shifting where their data resides — they’re not focusing enough on data visibility.
At the same time, employee-users — the same ones who are now perfectly comfortable and capable with the cloud — view AI apps as just another cloud tool.
Readily available access, easy prompting and the general intrigue of AI experimentation raise the odds that employees are inadvertently but precariously moving sensitive data into unapproved AI apps and agents, a practice known as shadow AI.
GenAI Adoption Is Surging — and So Are the Risks
In recently published research, we found that genAI adoption is surging — and raising risks:
- The total number of people using SaaS genAI applications is growing exponentially, tripling over the past year in the average organization.
- The amount of data being sent to SaaS genAI apps grew sixfold, from 3,000 to 18,000 prompts per month.
- Shadow AI remains a significant challenge, with 47 percent of genAI users using personal, unmanaged AI apps.
- In the average organization, both the number of users committing data policy violations and the number of data policy incidents have doubled over the past year, with an average of three percent of genAI users committing an average of 223 genAI data policy violations per month.
Why CISOs Face a Growing Risk Equation
The result: Data protection has become a recurring Achilles' heel for the modern enterprise as CISOs and their teams attempt to protect digital assets in transformational times.
As a technology, AI is evolving rapidly — too swiftly for cyber defense protocols to keep up. This is leaving organizations susceptible to the distribution (whether approved or not) and potential exposure of sensitive data through AI tools.
Best Practices for Securing Enterprise GenAI
To effectively manage the rising risk equation, CISOs and their teams should consider the following best practices.
Assess Your GenAI Landscape
Identify the genAI SaaS apps, platforms and locally hosted tools that reside throughout your organization, and determine who is using them.
Determine how your organization deploys them within different departments and workflows.
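As a starting point, teams can inventory genAI usage from network or proxy logs. The sketch below is a minimal, assumption-laden illustration: the domain list, log format and field names are hypothetical, and a real deployment would rely on a maintained catalog of AI services and your actual logging pipeline.

```python
import csv
import io
from collections import defaultdict

# Hypothetical genAI SaaS domains; a real inventory would be far larger and
# sourced from a maintained catalog of AI services.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def tally_genai_usage(log_file):
    """Count distinct users per genAI domain from proxy log rows (user,domain)."""
    usage = defaultdict(set)
    for row in csv.DictReader(log_file):
        domain = row["domain"].lower()
        if domain in GENAI_DOMAINS:
            usage[domain].add(row["user"])
    return {domain: sorted(users) for domain, users in usage.items()}

# Example proxy log excerpt (the CSV format is an assumption for illustration)
sample_log = io.StringIO(
    "user,domain\n"
    "alice,chat.openai.com\n"
    "bob,claude.ai\n"
    "alice,intranet.example.com\n"
    "carol,chat.openai.com\n"
)

print(tally_genai_usage(sample_log))
```

Even a rough tally like this surfaces which departments are experimenting with which tools, which is the raw material for the app-control and training steps that follow.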
Bolster AI App Controls
To reduce or eliminate shadow AI, enforce policies that ban anything but company-approved genAI apps.
Block the unapproved ones while establishing policies to prevent the unauthorized sharing of sensitive data with these tools.
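The allow/block/redact logic such a policy implies can be sketched in a few lines. This is a simplified illustration, not a real product configuration: the app names, the approved set and the sensitive-data flag are all assumptions, and in practice this decision would be made by a secure web gateway or DLP engine.

```python
# Hypothetical set of company-approved genAI apps (illustrative names only).
APPROVED_GENAI_APPS = {"corp-chatbot", "azure-openai-tenant"}

def evaluate_request(app_name, contains_sensitive_data):
    """Return a policy action for a genAI request: allow, block or redact."""
    if app_name not in APPROVED_GENAI_APPS:
        return "block"   # unapproved app: treat as shadow AI and block outright
    if contains_sensitive_data:
        return "redact"  # approved app, but strip sensitive fields before sending
    return "allow"       # approved app, no sensitive data detected
```

The key design choice is that unapproved apps are blocked unconditionally, while approved apps are still subject to data-level inspection, so approval alone never grants a free pass for sensitive data.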
Establish Guidelines
Train employees on how to properly work with approved solutions and educate them about safe genAI practices. Follow frameworks and industry standards.
Apply Relevant Frameworks
Routinely benchmark your controls against what the rest of your industry is doing. Leverage frameworks like the OWASP Top 10 for LLM Applications and the NIST AI Risk Management Framework (AI RMF).
Keep up-to-date on new developments in AI ethics, regulatory changes and attack trends — and adapt your security policies and practices accordingly.
Applying Lessons from the Cloud Era
We should never forget the lessons of the global cloud migration: Corporate leaders once harbored serious reservations about cloud security.
But through collective vigilance, those same leaders now perceive the cloud as relatively fortified, like a highly patrolled and well-guarded offsite company location.
At present, AI is more like a new, under-vetted contractor who is immediately given unchecked access to a broad range of data.
By implementing a structured genAI governance plan — including landscape assessments, app controls and industry-aligned frameworks — security teams can transform AI from an under-vetted contractor into a productive, well-monitored contributor to organizational success.
To achieve that level of control and visibility, organizations should leverage zero-trust solutions that help continuously verify access, limit implicit trust and protect sensitive data at every interaction point.
