AI Didn’t Break Cybersecurity, It Revealed It
For years, cybersecurity hasn’t failed. It has succeeded at solving the wrong problem.
We built programs to detect attackers, respond to alerts, and report activity. We invested in visibility: more logs, more tools, more dashboards, believing that seeing more would mean securing more.
Over time, failure became normal. Breaches were expected. Incidents were routine (Cremer et al., 2022). “Advanced threats” became the excuse. Then AI arrived, removing those excuses. AI didn’t break cybersecurity. It exposed what was already there.
Suddenly, executives are demanding answers, and boards want briefings. Policies are being written rapidly (The Darktrace Community, 2024). New categories of “AI risk” are emerging: vendors, frameworks, advisory services. The current state of AI risk management hinges on two points: first, it is still nascent and has not created a new class of risk; second, major shortcomings undermine its effectiveness: (1) no standardized assessment methods, so organizations cannot evaluate or compare risks consistently; (2) siloed risk management, which fragments oversight and weakens accountability; (3) unclear oversight roles, creating confusion about responsibility when AI systems go wrong; and (4) limited tooling for real-time detection and intervention, which hampers control over improper data use. These gaps push organizations into a reactive stance that rewards showy action over strong, foundational governance. As a result, policies often remain theoretical, with little to ensure practical enforcement (Splunk, 2026).
It exposed a longstanding issue that had been neglected all along.
Industry Optimization Focused on the Wrong Outcomes
Modern cybersecurity is not broken; it works according to its original design. The real problem: current programs optimize for outcomes that ignore the most critical risks.
Over the past two decades, the industry has made a series of subtle yet cumulative tradeoffs:
- Detection was emphasized over prevention.
- Visibility was valued over control.
- Activity was favored over measurable risk reduction.
Systems now excel at detecting failure: they identify suspicious activity, pinpoint its location, and track recurring events. Their alerts, tickets, and reports display effort and responsiveness but not prevention.
A persistent challenge remains: the same failure modes recur despite previous breaches of the same nature.
Credential theft and unpatched systems consistently expose organizations to risk (Chen et al., 2026).
Misconfigurations continue to expose sensitive data (Tenable Research, 2025).
Flat networks continue to enable lateral movement (Powell, 2019, pp. 67–105).
These are not new attack methods but recurring, preventable conditions. Instead of eliminating root causes, the industry focuses on detecting them more efficiently.
In many settings, security teams are judged by response speed and thoroughness, rather than by risk reduction. This activity fuels a cycle of action without progress. New, meaningful metrics are needed: fewer persistent vulnerabilities, near-miss prevention, real-time compliance, shorter patch cycles, and eliminated lateral movement. Tracking these would clarify actual progress.
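The outcome-oriented metrics above can be computed from ordinary ticket data. A minimal sketch in Python, with entirely hypothetical records, field names, and thresholds:

```python
from datetime import date
from statistics import median

# Hypothetical vulnerability records: (id, root_cause, opened, closed).
# All values are illustrative, not drawn from any specific tool.
vulns = [
    ("V1", "unpatched-system", date(2025, 1, 5), date(2025, 3, 1)),
    ("V2", "credential-theft", date(2025, 2, 1), date(2025, 2, 10)),
    ("V3", "unpatched-system", date(2025, 4, 2), None),  # still open
    ("V4", "misconfiguration", date(2025, 4, 20), date(2025, 5, 30)),
    ("V5", "unpatched-system", date(2025, 6, 1), None),
]

def patch_cycle_days(records):
    """Median days from discovery to remediation for closed findings."""
    closed = [(c - o).days for _, _, o, c in records if c is not None]
    return median(closed) if closed else None

def recurring_root_causes(records, threshold=2):
    """Root causes seen at least `threshold` times: candidates for
    elimination rather than faster detection."""
    counts = {}
    for _, cause, _, _ in records:
        counts[cause] = counts.get(cause, 0) + 1
    return {c: n for c, n in counts.items() if n >= threshold}

print(patch_cycle_days(vulns))        # median remediation time, in days
print(recurring_root_causes(vulns))   # → {'unpatched-system': 3}
```

Even a toy calculation like this shifts the conversation: a root cause that keeps reappearing is a design problem to eliminate, not an alert stream to tune.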
Systems were not designed to prevent failure, but rather to manage it.
AI Is an Accelerator, Not an Origin
Into this environment, AI was introduced, not as a disruptor of fundamentals but as an amplifier of reality.
Large language models, automation, and AI tools do not create new access to data. They use what already exists, following current paths at speeds no human can match.
These characteristics contribute to AI’s power and its capacity to reveal underlying systemic issues.
When sensitive data enters a public model, new risks can arise, such as the emergence of novel extraction or exploitation methods. Yet these stem from poor initial controls. AI may intensify exposure, but the main issue is the lack of safeguards before AI’s use.
When an AI-enabled workflow accesses systems across multiple SaaS platforms, the risk is not that AI connected them. The risk is that those connections were already possible, undocumented, and largely uncontrolled.
When “shadow AI” appears across an organization, it is not introducing a new behavior pattern. It is accelerating an old one: employees adopting tools that help them move faster than the controls designed to manage them.
AI did not break your controls. It operated exactly within the gaps you already had.
The Problem We Avoided: Data Movement
If there is a single thread that connects most modern security failures, it is not a lack of tools or even a lack of visibility.
It is a lack of understanding and control over how data actually moves.
Data today does not live in a single place. It moves continuously:
- Between SaaS applications
- Across APIs and integrations
- Through endpoints and browsers
- Between users, systems, and third-party services
Many organizations lack a full map of data flows. Ownership is split, responsibility is vague, and controls are uneven, often tied to storage instead of usage.
Tools like DLP, access controls, and segmentation are better suited to defined boundaries; they struggle as data movement becomes dynamic and user-driven.
Most organizations believe they control their data, but often they do not. AI does not create data exposure; it reveals it by using established pathways more efficiently (Schirmer, 2026).
Organizations don’t lose control of data because of AI. They lose control because they never had control to begin with.
From Slow Leakage to Instant Exposure
Before AI, many of these gaps manifested as slow, often unnoticed leakage. (Uliss, 2024)
A file shared too broadly.
An API with excessive permissions.
An employee using an unsanctioned tool to move faster.
These issues mattered, but human speed limited their impact. Data moved more slowly, mistakes spread gradually, and consequences took time to build.
AI eliminates this limitation.
What was once a manual process becomes automated.
What was once isolated becomes systemic.
What was once gradual becomes immediate.
A single prompt can now aggregate, transform, and expose data from multiple sources in seconds (IBM, 2026). Automated workflows, enabled by agentic AI systems acting as a digital workforce, can now autonomously replicate and distribute information at scale and execute decision-making processes continuously within departments (Sukharevsky et al., 2025; Conner, 2026).
AI did not increase risk in a linear fashion. It multiplied the consequences of existing conditions.
And because those conditions were never fully addressed, the multiplication effect is being felt across the entire environment at once.
The Comfort of Mislabeling the Problem
Faced with this shift, the industry has responded predictably: by creating a new category.
“AI risk.”
It is a convenient label. It creates a sense of novelty, urgency, and separation from existing problems. It enables organizations to form task forces, draft policies, and evaluate new tools without necessarily confronting the essential issues. But this framing comes at a cost.
Labeling these as “AI risk” suggests the threat comes mainly from the technology, overlooking prior decisions that caused current exposures. This shifts focus to new tools instead of addressing foundational security gaps.
In reality, this is not a new risk category.
It is the acceleration of existing ones:
- Weak data governance
- Insufficient understanding of data flows
- Excessive dependence on detection over control
- Misalignment between business velocity and security enforcement
Labeling these challenges as “AI risk” enables organizations to address only the symptoms, thereby avoiding accountability for the underlying systems that produced them (Saeri et al., 2025).
The Incentive Problem No One Mentions
There is another layer to this dynamic, one that is less technical and more structural.
According to research from the Futurum Group, cybersecurity buyers tend to value solutions that integrate smoothly with their existing systems over those that simply reduce complexity or cut costs (Montenegro, 2026). This creates an environment where vendors are incentivized to focus on integration and operational efficiency rather than on eliminating risk.
Frameworks and audits are built to standardize assessment, not to guarantee effectiveness. Organizations are incentivized to demonstrate compliance and activity, not necessarily outcomes (Park & Hastings, 2025).
Within this system, solving root problems (simplifying architecture, reducing tool sprawl, enforcing consistent controls) can be more disruptive than advantageous in the short term. It calls existing investments, processes, and metrics into question (Dinha, 2026; Benishti, 2025).
AI breaks this equilibrium.
By accelerating the consequences of existing weaknesses, AI renders these vulnerabilities more difficult to ignore and increasingly costly to sustain. This dynamic compels a confrontation between reported outcomes and actual conditions. (Accenture, n.d.)
And that tension is uncomfortable.
What Actually Needs to Change
If AI isn’t the root issue, the solution isn’t just AI-specific controls. What is required is a change in how security is designed and measured.
- From Assets to Data Flows
Organizations must move beyond unchanging inventories of systems and begin mapping how data actually moves between them. This includes understanding not only where data resides but also how it is accessed, transformed, and shared in practice. As an actionable first step, organizations should conduct an initial inventory of data sources and destinations across major business applications, followed immediately by deploying automated discovery tools to trace data flows between these points. These activities can be complemented by convening collaborative data flow mapping workshops with key stakeholders from IT, security, and business units. Leveraging existing data protection platforms that visualize sensitive data movement, or using network analysis and API monitoring, can help teams build an actionable picture of real-world data use. Even simple process-mapping exercises or reviewing SaaS integrations can quickly reveal high-risk pathways that require targeted controls. (Data Flow Mapping, n.d.)
- From Visibility to Intervention
Detection remains important, but it is insufficient on its own. Controls must be capable of intervening at the point of use: within browsers, applications, and workflows, where decisions about data actually occur.
- From Policies to Enforcement
Documented intent is not the same as operational control. Policies must be translated into enforceable mechanisms that consistently govern behavior throughout environments.
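To make the distinction concrete, here is a minimal, hypothetical sketch of a written rule ("sensitive records may only move to approved destinations") expressed as a check evaluated at the point of use rather than a document on a shelf. Labels and destination names are illustrative:

```python
# Hypothetical approved destinations for sensitive records.
APPROVED_DESTINATIONS = {"internal-warehouse", "approved-analytics"}

def enforce_transfer(record_sensitivity: str, destination: str) -> bool:
    """Return True if the transfer may proceed; block it otherwise."""
    if record_sensitivity == "sensitive" and destination not in APPROVED_DESTINATIONS:
        return False  # in a real system: block, log, and alert
    return True

# The policy now governs behavior directly instead of describing intent.
assert enforce_transfer("sensitive", "internal-warehouse")
assert not enforce_transfer("sensitive", "public-llm-plugin")
assert enforce_transfer("public", "public-llm-plugin")
```

A real deployment would hook such checks into the systems where transfers actually happen; the sketch only shows the shape of the translation from written policy to enforceable mechanism.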
- From Programs to Systems
Security programs often rely on processes and people that degrade under pressure. Security systems should be engineered with the same rigor as other critical systems: designed for scale, resilience, and predictable failure.
- From Reactive to Engineered Outcomes
Rather than responding to each new class of risk, organizations should focus on engineering environments that systematically reduce or eliminate classes of failure.
This is not a small shift. It requires rethinking architecture, redefining metrics, and, in many cases, challenging long-standing assumptions about how security is delivered.
In the financial services sector, organizations are adopting comprehensive data lineage mapping, integrating automated controls for transactional data, and establishing cross-department incident analysis teams to identify and remediate persistent vulnerabilities beyond surface-level monitoring. For instance, several leading European banks have implemented continuous data flow monitoring and embedded security interventions directly into business processes, leading to measurable reductions in repeated incidents of sensitive data exposure through real-time anomaly detection and automated policy enforcement (European Banking Authority, 2024).
In healthcare, some hospital systems have gone beyond traditional compliance by using machine learning–driven discovery tools to uncover unauthorized data pathways, pairing these insights with automated enforcement technologies, such as adaptive access controls, to minimize policy violations in patient data transfers and to ensure continuous tracking of data movement, even across cloud-based electronic health records (Jenkins et al., 2024).
Critical infrastructure operators are piloting cross-platform monitoring of operational technology networks, integrating threat intelligence feeds into response playbooks, and developing sector-specific governance frameworks that reflect unique regulatory and safety requirements.
While these efforts are still gaining traction, they offer sector-specific, evidence-based models for advancing foundational change. Ultimately, regardless of sector, this approach is the only path that addresses the problem at its source (“Investing in intelligence,” 2026).
A Moment of Clarity
AI did not emerge to disrupt cybersecurity, but rather to reveal its underlying flaws. It exposed undocumented data flows, inconsistent controls, and the assumption that visibility alone was sufficient.
For years, those gaps existed in relative obscurity: manageable, explainable, and often ignored. AI removed that obscurity. It made the consequences immediate, visible, and scalable.
Ultimately, the core insight is that AI has not diminished overall security but has instead intensified the visibility and impact of existing systemic vulnerabilities. While the foundational systems and their inherent weaknesses remain largely unchanged, AI compels organizations to confront these persistent issues by dramatically increasing the speed and scale at which their consequences manifest. (Malkawi & Alhajj, 2026)
For the first time, this reality has become impossible to ignore.
References
Benishti, E. (2025, September 3). Security tool bloat is the new breach vector. TechRadar Pro. https://www.techradar.com/pro/security-tool-bloat-is-the-new-breach-vector
Chen, Z., Zhang, Y., Liu, Y., Deng, G., Li, Y., Zhang, Y., Ning, J., Zhang, L. Y., Ma, L., & Li, Z. (2026). Credential leakage in LLM agent skills: A large-scale empirical study [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2604.03070
Conner, B. (2026, April 2). The pilot phase is over. Here’s what’s next for enterprise AI automation. TechRadar Pro. https://www.techradar.com/pro/the-pilot-phase-is-over-heres-whats-next-for-enterprise-ai-automation
Cremer, F., Sheehan, B., Fortmann, M., Kia, A. N., Mullins, M., Murphy, F., & Materne, S. (2022). Cyber risk and cybersecurity: A systematic review of data availability. The Geneva Papers on Risk and Insurance—Issues and Practice, 47. https://doi.org/10.1057/s41288-022-00266-6
Data flow mapping. (n.d.). UK GDPR. https://www.ukgdpr.org/services/data-flow-mapping
Dinha, F. (2026, April 6). Cybersecurity complexity is the new vulnerability. Forbes. https://www.forbes.com/councils/forbestechcouncil/2026/04/06/cybersecurity-complexity-is-the-new-vulnerability/
European Banking Authority. (2024, June 14). Annual report 2023. https://www.eba.europa.eu/publications-and-media/publications/annual-report-2023
IBM. (2026). Data integration tools in 2026: Types, functions and benefits. https://www.ibm.com/think/insights/data-integration-tools
Investing in intelligence: The impact of AI adoption and investment intensity on supply chain efficiency of green firms. (2026). Technological Forecasting and Social Change, 227, Article 124631. https://doi.org/10.1016/j.techfore.2026.124631
Malkawi, M., & Alhajj, R. (2026). AI-powered vulnerability detection and patch management in cybersecurity: A systematic review of techniques, challenges, and emerging trends. Machine Learning and Knowledge Extraction, 8(1), Article 19. https://doi.org/10.3390/make8010019
Montenegro, F. (2026, March 2). Futurum Intelligence Research reveals security operations teams are driving platform adoption to solve the “hybrid mess,” not just to cut budgets. The Futurum Group. https://futurumgroup.com/press-release/futurum-research-cybersecurity-buyers-prioritize-integration-over-cost-savings/
Powell, B. A. (2019). The epidemiology of lateral movement: Exposures and countermeasures with network contagion models. Journal of Cyber Security Technology, 4(2), 67–105. https://doi.org/10.1080/23742917.2019.1627702
Saeri, A. K., George, S. L., Graham, J., Lacarriere, C. D., Slattery, P., Noetel, M., & Thompson, N. (2025). Mapping AI risk mitigations: Evidence scan and preliminary AI risk mitigation taxonomy [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2512.11931
Schirmer, M. (2026, April 6). Building private AI: Control, compliance and competitive edge. TechRadar Pro. https://www.techradar.com/pro/building-private-ai-control-compliance-and-competitive-edge
Splunk. (2026). AI risk management in 2026: AI moves into production. https://www.splunk.com/en_us/blog/learn/ai-risk-management.html
Sukharevsky, A., Krivkovich, A., Gast, A., Storozhev, A., Maor, D., Mahadevan, D., Hämäläinen, L., & Durth, S. (2025). The agentic organization: A new operating model for AI. McKinsey & Company. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-agentic-organization-contours-of-the-next-paradigm-for-the-ai-era
Tenable Research. (2025, June 17). Cloud misconfigurations expose sensitive data and secrets. Tenable. https://www.tenable.com/press-releases/tenable-research-finds-pervasive-cloud-misconfigurations-exposing-critical-data-and-secrets
The Darktrace Community. (2024, May 13). The state of AI in cybersecurity: The impact of AI on cybersecurity solutions. Darktrace. https://www.darktrace.com/blog/the-state-of-ai-in-cybersecurity-the-impact-of-ai-on-cybersecurity-solutions
Uliss, R. (2024, October 31). Exploited but not forgotten: Takeaways from CISA’s 2023 vulnerability report. The National CIO Review. https://nationalcioreview.com/?p=53583
World Economic Forum. (2026). Global cybersecurity outlook 2026. https://www.weforum.org/publications/global-cybersecurity-outlook-2026/
About the Author
Joshua Copeland is the Director of Cybersecurity at Crescendo, an adjunct professor, speaker, and author of Unpopular Opinion: Burning Down the Bullsh*t to Rebuild Cybersecurity. He teaches graduate-level cybersecurity courses and is a published researcher. Through his writing, speaking, and the Criticality Live podcast, he is known for challenging conventional security thinking and for focusing on turning visibility into real-world control.
Joshua can be reached online at https://www.linkedin.com/in/joshuacopeland and at our company website https://www.crescendo.ai/
