
The unseen attack vector: Model drift and shadow AI
The most critical new threats in our extended supply chain are entirely digital and nearly invisible to traditional controls. I am not talking about a simple phishing attack or an unpatched server. I am talking about risks embedded in the very fabric of our vendors' operations through GenAI adoption.
First, consider shadow AI. Your key software vendor is using a public LLM to rapidly generate new code for your core product. They didn't tell you, because it sped up their delivery timeline. But now that model's output, shaped by training data potentially scraped from compromised or unlicensed sources, is woven into your production environment. If a third-party developer incorporates noncompliant code from an LLM, your enterprise is immediately exposed to intellectual property, licensing and security risks that current due diligence and contract language simply cannot catch (see the discussion of AI-generated liabilities in the Journal of AI Risk).
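To make that concrete, here is a minimal sketch of the kind of automated provenance check that could supplement contractual due diligence: flagging files in a vendor code drop whose headers mention none of your approved licenses. The APPROVED_LICENSES list, the vendor_drop/ path and the header heuristic are illustrative assumptions, not a real tool; dedicated license scanners and SBOM validation go much deeper.

```python
from pathlib import Path

# Hypothetical allow-list of licenses your legal team has cleared.
APPROVED_LICENSES = ["Apache-2.0", "MIT", "BSD-3-Clause"]

def flag_unverified_files(repo_root: str, max_header_chars: int = 2000) -> list[Path]:
    """Return source files whose header mentions no approved license.

    A crude heuristic: code pasted in from an LLM typically arrives with
    no license or provenance header at all, so a missing header is a
    useful first signal for a manual review.
    """
    flagged = []
    for path in Path(repo_root).rglob("*.py"):
        header = path.read_text(errors="ignore")[:max_header_chars]
        if not any(lic in header for lic in APPROVED_LICENSES):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for path in flag_unverified_files("vendor_drop/"):
        print(f"provenance review needed: {path}")
```

The point is not this particular heuristic but the operating model: provenance checks have to run on every vendor delivery, not once at contract signing.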
Second, we must recognize model drift. A vendor's core business logic, like fraud detection or optimization, might rely on a deployed AI model. Over time, that model's behavior can drift as its input data and operating environment subtly change, potentially exposing confidential data or introducing biases that violate new regulatory requirements. This is a subtle, systemic risk that an annual audit cannot flag. CISOs need to understand that the supply chain risk surface is now fluid, defined by the behavior of external algorithms, not just external firewalls.
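As a sketch of what continuous, rather than annual, oversight could look like, the snippet below computes the population stability index (PSI), a common drift metric, between a baseline sample of a vendor model's output scores and a live sample. The synthetic score distributions and the 0.2 alert threshold (a conventional rule of thumb) are assumptions for illustration, not a prescription.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between two score distributions on a shared set of bins.

    PSI = sum((live% - base%) * ln(live% / base%)) over bins.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 significant drift.
    """
    # Scores here are probabilities, so bin the [0, 1] interval.
    edges = np.histogram_bin_edges(baseline, bins=bins, range=(0.0, 1.0))
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero and log(0) in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Synthetic example: a vendor fraud model whose score distribution shifts.
rng = np.random.default_rng(42)
baseline_scores = rng.beta(2, 5, size=10_000)  # scores sampled at onboarding
live_scores = rng.beta(2, 3, size=10_000)      # scores observed in production today

psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f} ({'drift alert' if psi > 0.2 else 'stable'})")
```

A check like this, run on scores the vendor already reports, turns drift from an invisible contractual risk into a measurable operational signal.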
