Financial institutions are rapidly deploying AI, but new research suggests many banks may be securing the wrong layer of the stack.
Liquibase researchers warn that while organizations focus heavily on AI models and APIs, the database layer may be one of the most exposed parts of modern financial infrastructure.
“Governance for agents has to move in three directions at once,” said Adam Markowitz, CEO and co-founder of Drata, in an email to eSecurityPlanet.
He explained, “Down the stack, to where irreversible actions happen. Up the stack, to the compliance frameworks that define what governed action means. And across the stack, to every system where agents act.”
Markowitz added, “Most of the industry is still working in one direction at a time. That’s the conversation that needs to catch up.”
Key Takeaways About AI Banking Risks
- Researchers warn that banks may be overlooking the database layer while focusing AI security efforts on models and APIs.
- Autonomous AI agents could manipulate records, workflows, and business logic in ways that appear legitimate and are difficult to detect.
- AI-driven state corruption may create operational, compliance, and forensic challenges for frameworks like SOX, PCI DSS, and SOC 2.
- Trusted credentials, approved workflows, and legitimate automation tools could allow malicious AI activity to blend into normal operations.
- Financial organizations are being urged to adopt stronger database governance, zero trust controls, immutable logging, and continuous monitoring.
AI Database Risks and Recommended Controls
| AI Risk Area | Potential Impact | Recommended Controls |
| --- | --- | --- |
| Unauthorized schema changes | Corrupted financial records and reporting errors | Policy-enforced change management and approvals |
| Manipulated transaction workflows | Fraud, reconciliation issues, operational disruption | Continuous monitoring and anomaly detection |
| AI misuse of valid credentials | Hard-to-detect malicious activity | Zero trust and least privilege access |
| Weak audit visibility | Compliance and forensic investigation challenges | Immutable logging and cryptographic audit trails |
| Excessive AI access to production systems | Increased exposure to operational risk | Database segmentation and restricted write access |
| AI-driven state corruption | Long-term integrity and trust issues | Recovery testing and operational resilience planning |
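As a rough illustration of the first control in the table, the sketch below validates a proposed schema change against an allowlist policy before anything executes. The policy contents, table names, and the validate_change helper are hypothetical and stand in for whatever change management tooling an organization actually runs.

```python
# Minimal sketch of policy-enforced change management: a proposed schema
# change is checked against an allowlist policy before it is executed.
# POLICY and validate_change are illustrative, not any vendor's real API.
import re

# Hypothetical policy: changes are limited to approved tables, and
# destructive statements are always rejected.
POLICY = {
    "approved_tables": {"accounts", "transactions"},
    "forbidden_patterns": [r"\bDROP\b", r"\bTRUNCATE\b", r"\bDELETE\b"],
}

def validate_change(sql: str, target_table: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed schema change."""
    if target_table not in POLICY["approved_tables"]:
        return False, f"table '{target_table}' is not in the approved list"
    for pattern in POLICY["forbidden_patterns"]:
        if re.search(pattern, sql, re.IGNORECASE):
            return False, f"statement matches forbidden pattern {pattern}"
    return True, "change passes policy checks"

# The check happens *before* execution, not in an after-the-fact review.
print(validate_change("ALTER TABLE accounts ADD COLUMN risk_flag BOOLEAN",
                      "accounts"))   # (True, 'change passes policy checks')
print(validate_change("DROP TABLE accounts", "accounts"))
# (False, 'statement matches forbidden pattern \\bDROP\\b')
```

The point of the pattern is that a malicious or malformed change is stopped before it ever reaches the database, rather than being caught in a later manual review.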
AI Creates New Risks for Financial Systems
Many financial institutions focus AI governance on models and APIs while paying less attention to the databases where critical financial operations and records are stored.
Researchers cautioned that this creates a significant blind spot as AI systems become more autonomous and increasingly integrated into enterprise banking environments.
The researchers’ findings describe Mythos-class AI systems as autonomous agents capable of identifying weaknesses, chaining attacks, and executing actions across enterprise environments at machine speed.
They suggest that this changes the nature of operational risk for financial institutions because the primary threat is no longer limited to traditional data theft or ransomware-style disruption.
The Risk of AI-Driven State Corruption
Instead of overt theft or disruption, the researchers caution that AI-driven attacks may increasingly focus on silent state corruption, including unauthorized schema changes, manipulated records, and altered business logic that appear legitimate.
In banking environments, databases serve as both the execution point for AI-driven actions and the permanent record of those actions.
Researchers said autonomous AI agents could manipulate transaction workflows, bypass business controls, or create inconsistencies that are difficult to detect and reconcile downstream.
Because AI-driven actions can use trusted apps, approved workflows, and valid credentials, malicious activity may blend into normal operations and become harder for security teams to detect.
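One way to surface that kind of blended-in activity is baseline comparison: score each statement against what the issuing identity normally does and flag anything that falls outside it. The following is a minimal sketch under assumed data; the identity name, counts, and threshold are invented for illustration, and a real deployment would draw from live audit-log feeds.

```python
# Illustrative sketch: flag database activity that is valid in isolation
# but deviates from an identity's established behavioral baseline.
from collections import Counter

# Hypothetical baseline: statement types each service identity has
# historically issued.
baseline = {
    "reporting-agent": Counter({"SELECT": 9800, "INSERT": 150}),
}

def is_anomalous(identity: str, statement_type: str,
                 threshold: float = 0.01) -> bool:
    """Flag statement types rarely (or never) seen for this identity."""
    history = baseline.get(identity)
    if history is None:
        return True  # unknown identity: always flag for review
    total = sum(history.values())
    return (history.get(statement_type, 0) / total) < threshold

# A credentialed reporting agent suddenly issuing schema changes stands
# out, even though its credentials and connection are technically valid.
print(is_anomalous("reporting-agent", "SELECT"))       # False
print(is_anomalous("reporting-agent", "ALTER TABLE"))  # True
```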
Compliance and Accountability Challenges
The findings raise concerns around AI governance, software supply chain security, and forensic accountability, particularly for compliance frameworks like SOX, PCI DSS, and SOC 2 that rely on trusted audit trails, change management, and data integrity.
The researchers said that as AI systems gain greater autonomy in enterprise environments, governance controls must extend deeper into the database layer to preserve operational accountability and support audits and investigations.
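A hash-chained log is one concrete way to meet that bar: each audit entry commits to the hash of the previous entry, so silently editing or deleting history breaks verification. The sketch below illustrates the idea only; the field names and events are assumptions, not a production design.

```python
# Minimal sketch of a cryptographically verifiable audit trail: each entry
# includes the hash of the previous entry, so tampering breaks the chain.
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev_hash, "event": entry["event"]},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"actor": "ai-agent-7", "action": "ALTER TABLE accounts"})
append_entry(log, {"actor": "ai-agent-7", "action": "UPDATE balances"})
print(verify_chain(log))                 # True
log[0]["event"]["action"] = "SELECT 1"   # silently rewrite history
print(verify_chain(log))                 # False: the chain no longer verifies
```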
Reducing AI Exposure
To reduce the risks that come with AI adoption, financial organizations should implement stronger validation, monitoring, and access control measures that improve accountability and operational resilience.
- Implement policy-enforced database change management that validates modifications before execution rather than relying solely on manual reviews.
- Use cryptographically verifiable audit trails, immutable logging, and centralized governance controls to improve accountability and forensic visibility.
- Continuously monitor schema changes, transaction activity, database queries, and downstream state changes for signs of unauthorized manipulation or other anomalous behavior.
- Apply zero trust and least privilege principles to AI agents, automated workflows, third-party integrations, and sensitive database environments.
- Segment critical database systems and limit direct AI write access to production environments wherever possible; a deny-by-default broker pattern is sketched below.
- Strengthen identity validation, multi-party approvals, and policy-based controls around high-risk schema changes and privileged database operations.
- Test incident response, database recovery, and operational resilience plans to ensure organizations can quickly detect and recover from AI-driven state corruption events.
Together, these measures can help organizations build operational resilience, strengthen accountability, and reduce exposure to AI-driven manipulation, data integrity failures, and downstream system disruptions.
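To make the deny-by-default idea concrete, here is a minimal sketch of a broker that sits between an AI agent and production databases and permits only explicitly allowlisted operations. The role names, table names, and broker_execute stub are hypothetical, standing in for a real policy enforcement point.

```python
# Illustrative least-privilege gate: an AI agent's database requests pass
# through a broker that permits only allowlisted (role, operation, table)
# combinations; everything else, including valid-looking writes, is denied.

ALLOWED_OPERATIONS = {
    ("reconciliation-agent", "SELECT", "transactions"),
    ("reconciliation-agent", "INSERT", "reconciliation_notes"),
}

class PermissionDenied(Exception):
    pass

def broker_execute(role: str, statement_type: str, table: str, sql: str) -> None:
    """Deny-by-default broker between an agent and production databases."""
    if (role, statement_type, table) not in ALLOWED_OPERATIONS:
        raise PermissionDenied(
            f"{role} may not {statement_type} on {table}: not allowlisted")
    print(f"executing for {role}: {sql}")  # stand-in for a real DB call

broker_execute("reconciliation-agent", "SELECT", "transactions",
               "SELECT id, amount FROM transactions WHERE status = 'open'")
try:
    broker_execute("reconciliation-agent", "UPDATE", "transactions",
                   "UPDATE transactions SET status = 'closed'")
except PermissionDenied as err:
    print("blocked:", err)
```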
AI Operations Require Stronger Governance
As AI systems evolve from analytical assistants into tools capable of executing operational tasks, security and governance concerns are expanding beyond model behavior and application security to the underlying infrastructure and data systems AI interacts with directly.
Liquibase researchers say organizations must secure not only AI models, but also the databases, workflows, and enterprise systems where AI-driven actions can affect business operations and compliance.
This also reinforces the importance of zero trust principles as organizations work to limit AI access, continuously validate activity, and manage the risks introduced by AI adoption.
