As organizations accelerate their AI adoption, many are turning to generative AI (GenAI) as a cornerstone of their security strategy.
But according to Melissa Ruzzi, Director of AI at AppOmni, relying on GenAI alone may open more gaps than it closes.
“GenAI is non-deterministic and language-focused, so it’s not the most appropriate tool in certain cases,” Ruzzi explained in an email to eSecurityPlanet.
She added, “It’s like replacing a calculator by rolling a dice when you want to sum up numbers.”
Her perspective reflects a growing realization across the industry: while GenAI is powerful, it is only one piece of a much larger AI toolkit required to effectively manage modern cybersecurity challenges.
Why GenAI Alone Falls Short in Security
Security teams are facing unprecedented data volume and complexity, with millions of signals generated across cloud environments, SaaS platforms, and endpoints.
While GenAI can help summarize and interpret data, it struggles with deterministic tasks, deep mathematical analysis, and high-volume correlation.
Ruzzi emphasized that success in AI-driven security depends less on model sophistication and more on how well teams combine different approaches. “Security domain knowledge, not sophisticated AI algorithms, is the primary driver of success,” she said.
This means organizations must move beyond the assumption that large language models alone can solve security challenges and instead adopt a more balanced approach that includes traditional machine learning, statistics, and data science.
Why GenAI Struggles With Raw Security Data
One of the key limitations of GenAI in security is its reliance on language and probabilistic outputs. When applied to raw security data without context or structure, it can produce generic or even misleading results.
Ruzzi noted that inserting large volumes of unrefined data into GenAI systems often limits their effectiveness. Without clean, high-fidelity data and proper domain context, AI outputs lack the depth needed for actionable insights.
This is why many organizations are beginning to rethink their strategies — shifting from GenAI-centric approaches to more integrated AI frameworks.
Applying the Right AI to the Right Problem
To get meaningful value from AI, organizations need to apply the right tools to the right problems. That includes combining:
- Traditional machine learning for pattern detection and anomaly identification.
- Advanced statistics for quantitative analysis and risk scoring.
- GenAI and AI agents for contextual understanding, summarization, and workflow automation.
This layered approach allows teams to bridge the gap between numerical analysis and contextual insight — something no single AI method can achieve on its own.
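The layering described above can be sketched in miniature. In this illustrative example, a simple z-score detector stands in for traditional ML, a weighted score stands in for statistical risk analysis, and a template summary stands in for the GenAI layer; the field names, weights, and thresholds are invented for illustration, not taken from any product.

```python
from statistics import mean, stdev

# Illustrative sketch: z-score anomaly detection (the "ML" layer),
# a weighted composite score (the "statistics" layer), and a template
# summary standing in for a GenAI summarization step. All numbers
# and field names are assumptions made for this example.

def detect_anomalies(logins_per_hour, threshold=2.0):
    """Flag hours whose login counts deviate strongly from the baseline."""
    mu, sigma = mean(logins_per_hour), stdev(logins_per_hour)
    return [i for i, n in enumerate(logins_per_hour)
            if sigma and abs(n - mu) / sigma > threshold]

def risk_score(anomaly_count, privilege_level, data_sensitivity):
    """Combine signals into a 0-100 score; weights here are arbitrary."""
    return min(100, anomaly_count * 20 + privilege_level * 10 + data_sensitivity * 15)

def summarize(user, score, anomalous_hours):
    """Stand-in for a GenAI layer that turns numbers into a narrative."""
    return (f"{user}: risk {score}/100; unusual login volume "
            f"in hours {anomalous_hours}")

logins = [4, 5, 3, 6, 4, 5, 4, 48]  # hour 7 is a spike
hours = detect_anomalies(logins)
score = risk_score(len(hours), privilege_level=3, data_sensitivity=2)
print(summarize("admin@example.com", score, hours))
```

The point is not the particular math but the division of labor: deterministic layers find and score the signal, and the language layer only explains it.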
Ruzzi also highlighted that security risk is inherently contextual.
It’s not just about detecting anomalies, but understanding relationships — such as how a user’s permissions interact with sensitive data and system configurations.
Combining multiple AI techniques enables deeper correlation and more accurate risk assessment.
How to Use AI Effectively in Security Teams
Organizations looking to operationalize AI in cybersecurity should focus on several key areas.
Prioritize Data Quality Over Model Complexity
AI systems are only as effective as the data they rely on.
Clean, comprehensive datasets — including audit logs, configurations, and API telemetry — are far more valuable than adopting increasingly complex models.
Poor data quality leads to hallucinations and unreliable outputs, regardless of the model used.
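Putting data quality first can start with something as plain as a pre-ingestion validation step that drops malformed records before they reach any model. The audit-log schema below is hypothetical; a real pipeline would validate against its actual log format.

```python
# Hypothetical audit-log schema: real pipelines would check against the
# actual log format (required fields, timestamp syntax, allowed enums).
REQUIRED = {"timestamp", "actor", "action", "resource"}

def clean_records(records):
    """Keep only complete, non-empty records; count what was dropped."""
    kept, dropped = [], 0
    for rec in records:
        if REQUIRED <= rec.keys() and all(rec[k] for k in REQUIRED):
            kept.append(rec)
        else:
            dropped += 1
    return kept, dropped

raw = [
    {"timestamp": "2024-05-01T12:00:00Z", "actor": "alice",
     "action": "read", "resource": "s3://bucket/report"},
    {"timestamp": "", "actor": "bob", "action": "delete",
     "resource": "db/users"},           # empty timestamp: dropped
    {"actor": "carol", "action": "login"},  # missing fields: dropped
]
kept, dropped = clean_records(raw)
print(f"kept {len(kept)}, dropped {dropped}")
```

Tracking the drop count matters as much as the filtering itself: a rising drop rate is an early warning that upstream telemetry has degraded.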
Shift From Alert Triage to Narrative Insights
Security teams are overwhelmed by alerts. Instead of using AI to investigate issues one by one, organizations should leverage it to surface meaningful patterns and prioritize what matters most.
This allows analysts to focus on high-impact risks rather than sifting through noise.
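One simple way to move from one-by-one triage toward pattern-level insight is to group alerts by a shared signature and rank the groups by aggregate severity. The grouping key and severity weights below are illustrative assumptions, not a vendor's scheme.

```python
from collections import defaultdict

# Illustrative: group alerts sharing a (rule, source) signature, then
# rank groups so analysts review patterns rather than single alerts.
SEVERITY = {"low": 1, "medium": 3, "high": 9}  # assumed weights

def prioritize(alerts):
    groups = defaultdict(list)
    for a in alerts:
        groups[(a["rule"], a["source"])].append(a)
    ranked = sorted(
        groups.items(),
        key=lambda kv: sum(SEVERITY[a["severity"]] for a in kv[1]),
        reverse=True,
    )
    return [(key, len(items)) for key, items in ranked]

alerts = [
    {"rule": "impossible-travel", "source": "sso", "severity": "high"},
    {"rule": "failed-login", "source": "vpn", "severity": "low"},
    {"rule": "failed-login", "source": "vpn", "severity": "low"},
    {"rule": "impossible-travel", "source": "sso", "severity": "high"},
]
print(prioritize(alerts))
```

An analyst now sees two patterns in priority order instead of four undifferentiated alerts; a GenAI layer could then narrate the top group.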
Bridge Numerical and Contextual Analysis
Pure statistical models lack context, while GenAI lacks precision in numerical analysis.
Combining both enables a more complete understanding of risk, tying together behavior, permissions, and data sensitivity.
Automate Repetitive Risk Assessment Tasks
AI can reduce manual effort by automating correlation and identifying complex risks — such as unauthorized privilege escalations or outdated configurations.
This helps shift teams from reactive investigation to proactive response, reducing time to remediation from weeks to minutes.
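The correlation described above can be as simple as joining a role-grant event against the absence of a matching approval record. The event shapes and the "admin" role name here are assumptions for illustration, not any real product's schema.

```python
# Illustrative correlation: flag role grants that elevate a user to
# admin without a matching change-approval record. Event shapes and
# role names are invented for this sketch.
def find_unapproved_escalations(grants, approvals):
    approved = {(a["user"], a["role"]) for a in approvals}
    return [g for g in grants
            if g["role"] == "admin" and (g["user"], g["role"]) not in approved]

grants = [
    {"user": "alice", "role": "admin"},
    {"user": "bob", "role": "viewer"},
    {"user": "mallory", "role": "admin"},
]
approvals = [{"user": "alice", "role": "admin"}]
for g in find_unapproved_escalations(grants, approvals):
    print(f"unapproved escalation: {g['user']} -> {g['role']}")
```

Running checks like this continuously, rather than during periodic audits, is what compresses remediation from weeks to minutes.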
Building Smarter AI Security Programs
As AI continues to reshape cybersecurity, the industry is moving toward more integrated and pragmatic approaches.
The focus is shifting away from chasing the most advanced models and toward building systems that combine multiple AI techniques with strong domain expertise.
This evolution also reflects a broader trend: AI is becoming essential for keeping pace with modern threats, but its effectiveness depends on how it is implemented.
Organizations that treat AI as a standalone solution risk overengineering and underdelivering, while those that integrate it thoughtfully into existing workflows will gain the most value.
Ultimately, GenAI is not a replacement for traditional methods — it’s an extension of them.
As security teams refine their strategies, the goal is not to rely on a single tool, but to build a cohesive AI-driven approach that balances automation, accuracy, and human judgment.
As Ruzzi’s insights highlight, the future of cybersecurity isn’t just about adopting AI — it’s about using the right combination of tools, data, and expertise to make it work effectively.
