
Jain pointed out that AI agents are no different. “Unaccounted agents often emerge through sanctioned, low-code tools and informal experimentation, bypassing traditional IT scrutiny until something breaks. You cannot govern what you can’t see. So, we need to understand that the real issue isn’t ‘rogue AI’: it’s invisible AI.”
Info-Tech, he added, “strongly believes that governing AI models or pre-approving agents is no longer enough, because invisible, rogue agents will do tandava (the dance of destruction) at runtime. When it comes to governing these AI agents, the numbers are so large that approval gates will not be sustainable without halting innovation. Continuous oversight should be the priority for AI governance after setting initial guardrails as part of the AI strategy.”
Perspective, he said, also needs to change: “AI agents are no longer just helpful bots. They often operate with delegated yet broad credentials, persistent access, and undefined accountability. This can become a costly mistake, as overprivileged agents are the new insider threat. We need to define tiered access for AI agents. While we can’t avoid giving a few people keys to our house to speed things up, if we trust every stranger with our house keys, we can’t blame the locksmith when things go missing.”
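The tiered access Jain recommends can be sketched as a deny-by-default permission map, where each agent tier is granted only an explicit set of scopes. This is a minimal illustration of the idea; the tier names and scope strings below are assumptions for the example, not an Info-Tech framework:

```python
from enum import Enum

class AgentTier(Enum):
    """Hypothetical access tiers for AI agents, least to most privileged."""
    SANDBOX = 1    # experimentation only, synthetic data
    ASSIST = 2     # read access to approved production data
    OPERATOR = 3   # scoped write access, subject to runtime oversight

# Illustrative mapping of tiers to permission scopes (assumed names).
TIER_SCOPES = {
    AgentTier.SANDBOX: {"data:read:synthetic"},
    AgentTier.ASSIST: {"data:read:synthetic", "data:read:prod"},
    AgentTier.OPERATOR: {"data:read:synthetic", "data:read:prod",
                         "data:write:scoped"},
}

def is_allowed(tier: AgentTier, scope: str) -> bool:
    """Deny by default: an agent may act only within its tier's scopes."""
    return scope in TIER_SCOPES.get(tier, set())

print(is_allowed(AgentTier.ASSIST, "data:read:prod"))     # True
print(is_allowed(AgentTier.ASSIST, "data:write:scoped"))  # False
```

The point of the sketch is the default: an agent that was never assigned a tier, or that requests a scope outside its tier, is denied rather than silently granted the broad credentials Jain warns about.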
