Microsoft’s newly announced Connected Agents feature in Copilot Studio is already raising security concerns after researchers demonstrated how it can be abused to gain stealthy backdoor access to enterprise systems.
The feature, unveiled at Build 2025, enables AI agents to connect and reuse each other’s capabilities — but attackers can potentially exploit it to bypass visibility controls and trigger sensitive actions without detection.
“Always assume that any agent that enables the connected agents feature will be accessible to the entire internet anonymously,” said Zenity Labs researchers in their analysis.
Copilot Studio’s Agent Visibility Problem
Copilot Studio is increasingly used to automate customer support, internal workflows, and business communications, often with access to sensitive data or privileged tools.
Because Connected Agents is enabled by default on new agents, many organizations may already be exposed without realizing it.
When active, the Connected Agents feature allows an agent’s knowledge, tools, and topics to be accessed by any other agent within the same environment.
Copilot Studio currently provides no built-in way to see which agents have connected to a given agent.
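There is no documented way to enumerate those agent-to-agent links directly, so a practical starting point is simply building an inventory of which agents exist in each environment and reviewing their settings by hand. The sketch below illustrates one way to do that inventory step; it assumes agents can be read from the environment's Dataverse `bots` table via the Dataverse Web API, and the environment URL and Entra ID app registration details are placeholders rather than values from the research.

```python
# Sketch: list Copilot Studio agents in one Power Platform environment via the
# Dataverse Web API, so each agent's Connected Agents setting can be reviewed
# manually in the Copilot Studio maker portal.
# Assumes an Entra ID app registration with Dataverse access (client credentials);
# all identifiers below are placeholders.
import msal
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
TENANT_ID = "<tenant-guid>"                    # placeholder tenant
CLIENT_ID = "<app-registration-guid>"          # placeholder app registration
CLIENT_SECRET = "<client-secret>"              # placeholder secret

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=[f"{ENV_URL}/.default"])
if "access_token" not in token:
    raise RuntimeError(f"Token request failed: {token.get('error_description')}")

# Copilot Studio agents are stored in the Dataverse 'bot' table (entity set 'bots').
resp = requests.get(
    f"{ENV_URL}/api/data/v9.2/bots?$select=name,createdon",
    headers={
        "Authorization": f"Bearer {token['access_token']}",
        "Accept": "application/json",
    },
    timeout=30,
)
resp.raise_for_status()

for bot in resp.json().get("value", []):
    print(f"{bot['name']}  (created {bot['createdon']})")
```

The output is only an inventory: whether Connected Agents is switched on for each listed agent still has to be checked in that agent's settings, since the connection state itself is not surfaced.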
How Connected Agents Enable Silent Abuse
In proof-of-concept demonstrations, Zenity Labs showed how attackers can create a malicious backdoor agent and connect it to a legitimate, trusted agent within the same Copilot Studio environment.
Once connected, the malicious agent can invoke the trusted agent’s tools and capabilities without user interaction or visible audit events.
The risk becomes severe when the targeted agent has access to powerful tools, such as the ability to send emails from an official company domain or to query sensitive internal data.
In one scenario, researchers demonstrated how a compromised or insider-created agent could silently trigger an email-sending tool, enabling large-scale phishing or impersonation campaigns that appear to originate from the organization itself.
Because invocations through Connected Agents generate no messages in the target agent’s activity tab, standard monitoring and audit mechanisms fail to capture the abuse.
This allows attackers to operate covertly, avoiding detection while leveraging legitimate infrastructure.
From a security standpoint, the root issue is excessive trust combined with insufficient visibility.
Connected Agents implicitly assumes that all agents within an environment are equally trustworthy, collapsing privilege boundaries and enabling lateral movement between agents.
Zenity Labs confirmed that this is not just a theoretical issue: the attack paths have been successfully demonstrated, and exploitation requires no advanced techniques beyond basic agent creation and configuration.
Reducing Risk From Connected Agents
With Connected Agents enabled by default and limited visibility into agent-to-agent interactions, organizations need to take deliberate steps to reduce unintended exposure.
- Audit all Copilot Studio agents to identify where Connected Agents is enabled and assess associated risk (a sketch of this step follows this list).
- Disable Connected Agents on agents that expose unauthenticated tools, sensitive knowledge, or business-critical capabilities.
- Enforce tool-level authentication so sensitive actions require explicit user credentials rather than inherited permissions.
- Restrict agent creation, publishing, and modification to approved users and separate development and production environments.
- Review and limit agent knowledge sources, publishing channels, and access scopes to align with least-privilege principles.
- Monitor tenant activity and audit logs for unusual agent behavior, and treat any agent with Connected Agents enabled as publicly accessible until stronger safeguards exist.
Collectively, these steps help reduce hidden trust relationships, improve visibility, and protect against Connected Agents becoming an unmonitored path to sensitive systems.
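For the audit step in the first bullet, the same Dataverse Web API can be extended to drill into an individual agent and list the components attached to it, which helps gauge what a connected agent could reach through it. This is a sketch under stated assumptions: it reuses the environment URL and token from the inventory example above, and it assumes Copilot Studio stores topics and other components in a `botcomponents` table with a `parentbotid` lookup; verify both names against your environment's metadata before relying on them.

```python
# Sketch: list the components attached to one Copilot Studio agent, to help
# judge what a connected agent could invoke through it.
# Assumes the Dataverse 'botcomponent' table (entity set 'botcomponents') and
# its 'parentbotid' lookup; verify both against your environment's metadata.
import requests

def list_agent_components(env_url: str, access_token: str, bot_id: str) -> list[dict]:
    """Return the name and component type of every component owned by one agent."""
    resp = requests.get(
        f"{env_url}/api/data/v9.2/botcomponents",
        params={
            "$select": "name,componenttype",
            "$filter": f"_parentbotid_value eq {bot_id}",  # bot_id is the agent's GUID
        },
        headers={
            "Authorization": f"Bearer {access_token}",
            "Accept": "application/json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

# Example usage, with values taken from the inventory sketch above:
# for component in list_agent_components(ENV_URL, token["access_token"], "<botid-guid>"):
#     print(component["componenttype"], component["name"])
```

The component type comes back as a numeric option-set value, so the results are best read alongside the agent's configuration in the maker portal rather than on their own.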
When AI Productivity Outpaces Security
The Connected Agents issue highlights a broader challenge organizations face as AI platforms evolve: productivity and automation features are often deployed more quickly than the security frameworks needed to govern them.
As AI agents become more autonomous and interconnected, even minor misconfigurations can introduce indirect access paths that bypass traditional controls and audit mechanisms.
This research underscores the need for AI-specific governance, continuous monitoring, and threat modeling: as agents increasingly act on behalf of users and other systems, implicit trust becomes riskier wherever visibility is limited.
These risks make it increasingly important for organizations to establish clear generative AI policies that define acceptable use, access controls, and security responsibilities as AI capabilities expand.
