The rise of agentic artificial intelligence (AI) is fundamentally reshaping how software is developed, tested, and secured.
In a recent discussion with Jeremy Katz, VP of Code Security at Sonar, key insights emerged about how AI-driven workflows are accelerating development while introducing new security challenges that organizations must address.
Agentic Workflows in Modern Development
Agentic workflows represent a shift from human-directed coding to AI-driven execution.
Instead of developers writing code line by line, AI agents are given high-level objectives and autonomously generate code, create tests, and iterate based on results.
“What we’re starting to see is more workflows where agents are given a high-level task and they execute it. This means they write the code, write the test, run the test, and iterate as they see issues along the way,” said Katz.
This allows teams to move from concept to working output far faster, often producing usable code within a single short iteration cycle.
While this increases efficiency, it also changes the developer’s role — from direct creator to reviewer and guide — requiring new approaches to oversight and validation.
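To make the loop concrete, here is a minimal Python sketch of a generate-test-iterate cycle. The model call is stubbed out (`generate_code` is a hypothetical placeholder, not a real API), but the structure mirrors the workflow Katz describes: write code, run the tests, and feed failures back into the next attempt.

```python
import subprocess
import sys
import tempfile
from pathlib import Path


def generate_code(task: str, feedback: str = "") -> str:
    """Hypothetical placeholder for a call to a code-generation model.

    A real agentic workflow would send the task, plus any feedback from
    the previous iteration, to an LLM and return its code.
    """
    return "def add(a, b):\n    return a + b\n"


def run_tests(code: str) -> tuple[bool, str]:
    """Write the generated code plus a simple test to disk and execute it."""
    test_source = code + "\nassert add(2, 3) == 5\nprint('tests passed')\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(test_source)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    Path(path).unlink()
    return result.returncode == 0, result.stdout + result.stderr


def agent_loop(task: str, max_iterations: int = 3) -> str | None:
    """Generate code, run the tests, and iterate on failures."""
    feedback = ""
    for _ in range(max_iterations):
        code = generate_code(task, feedback)
        passed, output = run_tests(code)
        if passed:
            return code    # working output is accepted
        feedback = output  # test failures guide the next attempt
    return None            # iteration budget exhausted


if __name__ == "__main__":
    print(agent_loop("write a function that adds two numbers"))
```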
Security Implications of Machine-Speed Code Generation
When code is generated at machine speed, the traditional depth of human understanding is reduced.
Developers may no longer fully grasp every implementation detail, as AI systems handle much of the execution.
This creates a risk where incorrect assumptions introduced early in the process propagate throughout the codebase without being challenged.
Unlike human developers, who naturally question and revise their work, AI agents tend to follow initial assumptions rigidly. This can result in compounding errors, making early-stage validation critical.
Limitations of Traditional Code Review and Scanning
Traditional security practices, such as post-development code scanning in continuous integration (CI), are becoming less effective in this new paradigm.
By the time vulnerabilities are detected, they may already be deeply embedded in AI-generated code, making remediation more complex and costly.
Additionally, the volume and speed of generated code make it unrealistic for humans to manually review everything.
Even with improved prompting techniques — where developers guide AI systems through structured planning — human oversight alone cannot scale to match machine output.
Moving Security into the “Inner Loop”
To address these challenges, organizations must shift security and quality checks earlier in the development lifecycle, often referred to as moving into the “inner loop.”
This means embedding validation directly into the code generation process rather than waiting until later stages.
By enforcing checks in real time, teams can prevent issues such as hardcoded secrets or insecure dependencies from being introduced in the first place.
Early detection reduces the risk of compounding errors and improves overall efficiency, as fixes occur at the point of creation rather than after deployment.
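As a rough sketch of what an inner-loop check might look like, the example below scans generated code for hardcoded secrets before it is ever accepted. The regex patterns and function names are illustrative assumptions, not any vendor's actual ruleset; a production check would rely on a maintained scanner.

```python
import re

# Illustrative patterns for common hardcoded-secret shapes. A production
# inner-loop check would use a maintained ruleset or dedicated scanner,
# not a hand-rolled list like this one.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"""(?i)(api[_-]?key|secret|token)\s*=\s*['"][^'"]{8,}['"]"""),
]


def check_for_secrets(source: str) -> list[str]:
    """Flag lines of generated code that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            findings.append(f"line {lineno}: possible hardcoded secret")
    return findings


if __name__ == "__main__":
    generated = 'api_key = "sk-1234567890abcdef"\nprint("hello")\n'
    for finding in check_for_secrets(generated):
        print(finding)  # the agent would be told to regenerate this code
```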
Enabling Real-Time and Autonomous Verification
Emerging approaches focus on integrating automated guardrails and deterministic verification into development workflows.
These systems continuously evaluate generated code against predefined security and quality standards, ensuring compliance as code is written.
Technologies such as automated quality gates and policy enforcement tools allow organizations to define what acceptable code looks like.
Rather than relying solely on human judgment, these systems provide consistent, scalable validation aligned with organizational requirements.
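The sketch below illustrates, in simplified form, how such a quality gate might be structured: each policy rule is a named predicate over the generated source, and any violation fails the gate. The rules shown are assumptions for illustration, not any specific product's policies.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    """One policy rule: a name plus a predicate that passes compliant code."""
    name: str
    check: Callable[[str], bool]


# Illustrative rules only; real gates enforce organization-specific policy.
RULES = [
    Rule("no dynamic code execution",
         lambda src: "eval(" not in src and "exec(" not in src),
    Rule("no unresolved TODO markers", lambda src: "TODO" not in src),
    Rule("file under 500 lines", lambda src: len(src.splitlines()) <= 500),
]


def quality_gate(source: str) -> tuple[bool, list[str]]:
    """Evaluate code against every rule and fail the gate on any violation."""
    violations = [rule.name for rule in RULES if not rule.check(source)]
    return (not violations, violations)


if __name__ == "__main__":
    passed, violations = quality_gate("print('hello')\n")
    print("gate passed" if passed else f"gate failed: {violations}")
```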
Rethinking the Role of Security Teams
As AI agents take on a larger role in coding, security teams must evolve from gatekeepers to enablers.
Their focus should shift toward designing guardrails, defining standards, and embedding security controls into development workflows.
Resisting AI adoption is not a viable strategy, as organizational pressure for increased speed and efficiency will continue to grow.
Instead, security teams should collaborate closely with developers to ensure that innovation does not come at the expense of risk management.
Future Outlook: Convergence and Collaboration
Looking ahead, the boundaries between development, security, and operations are expected to continue blurring.
Developers will take on greater responsibility for security, while shared standards and automated verification systems will ensure consistency across teams.
AI agents will become integral collaborators, augmenting human capabilities rather than replacing them.
Success will depend on how effectively organizations adapt their processes, embrace automation, and establish clear standards for secure software development.
Agentic AI is accelerating software development at an unprecedented pace, but it also demands a fundamental rethinking of security practices.
As organizations rethink security in the age of AI-driven development, zero trust, a model centered on continuous verification and the elimination of implicit trust across systems, can help reduce organizational risk.
