Microsoft has confirmed a bug in Microsoft 365 Copilot Chat that allowed the AI assistant to summarize emails labeled as confidential, even when sensitivity labels and data loss prevention (DLP) policies were in place.
The issue, first identified on Jan. 21, 2026, and tracked internally as CW1226324, affected the “Work” tab in Copilot Chat.
“Without proper due diligence on the data handling by the AI, sensitive information may not be treated with the rigor it should,” said Melissa Ruzzi, director of AI at AppOmni, in an email to eSecurityPlanet.
She added, “To mitigate these threats, the first important action is to make sure employees are trained on best practices for using AI.”
Ruzzi explained, “Give them guidelines on what they should pay attention to, empowering them to not only properly use AI, but also raise concerns as issues arise. This can help detect problems early.”
Inside the Microsoft Copilot Bug
The incident highlights the growing complexity of governing AI capabilities embedded within modern SaaS platforms.
As organizations increasingly integrate generative AI into core productivity workflows, traditional security and compliance controls must evolve to account for how large language models (LLMs) access, process, and summarize enterprise data.
Copilot Chat — Microsoft’s AI-powered assistant integrated across Outlook, Word, Excel, PowerPoint, and OneNote — is designed to help users surface, synthesize, and contextualize organizational information.
By drawing on content such as emails, documents, meeting notes, and other Microsoft 365 data, Copilot enables users to generate summaries, draft responses, and extract insights from large volumes of information.
Its value proposition depends on broad contextual access to enterprise data — but that same breadth of access also increases the importance of strict policy enforcement.
How the Copilot Bug Bypassed DLP Controls
According to Microsoft, the issue stemmed from an unspecified code error affecting Copilot Chat’s “Work” tab.
The flaw allowed the assistant to process and summarize emails stored in Sent and Draft folders even when those messages were protected by confidentiality (sensitivity) labels and governed by active DLP policies.
In effect, Copilot analyzed and summarized content that organizations had explicitly marked as restricted and expected to be excluded from automated AI processing.
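For illustration only, and not a reflection of Microsoft’s actual code, the short Python sketch below models the enforcement point that failed here: messages carrying a restrictive sensitivity label should be filtered out of the assistant’s retrieval context before any summarization happens, regardless of the folder they sit in. The MailItem class, label names, and build_ai_context function are all hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch only; not Microsoft's implementation. It models the
# enforcement point the bug skipped: items carrying a restrictive sensitivity
# label should be dropped from the AI assistant's retrieval context before
# summarization, no matter which folder (Inbox, Sent, Drafts) holds them.


@dataclass
class MailItem:
    subject: str
    folder: str                    # e.g., "Inbox", "Sent", "Drafts"
    sensitivity_label: str | None  # e.g., "Confidential", or None if unlabeled


# Hypothetical label names; real tenants define their own label taxonomy.
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}


def build_ai_context(items: list[MailItem]) -> list[MailItem]:
    """Return only the items the assistant is allowed to summarize."""
    allowed = []
    for item in items:
        if item.sensitivity_label in RESTRICTED_LABELS:
            # Enforcement must not depend on folder location: the reported bug
            # let labeled Sent and Draft items through.
            continue
        allowed.append(item)
    return allowed


if __name__ == "__main__":
    mailbox = [
        MailItem("Q3 acquisition terms", "Sent", "Confidential"),
        MailItem("Lunch plans", "Inbox", None),
    ]
    for item in build_ai_context(mailbox):
        print("Eligible for summarization:", item.subject)
```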
Why the Incident Raises Compliance Concerns
“This did not provide anyone access to information they weren’t already authorized to see,” a Microsoft spokesperson said in a message to BleepingComputer.
In other words, users could only see summaries of content they already had permission to access within Microsoft 365.
However, the behavior deviated from Copilot’s intended design, which is meant to respect sensitivity labels and DLP controls by excluding protected content from AI-driven retrieval and summarization workflows.
The situation reflects a failure in policy enforcement within an AI-driven workflow, where established security controls did not operate as intended once AI processing was introduced.
When there is any misalignment between access controls, data protection policies, and the logic governing AI retrieval and summarization, organizations face heightened compliance, governance, and regulatory risks.
Microsoft began rolling out a fix in early February 2026 and has stated that it continues to monitor deployment to ensure the issue is fully resolved.
Mitigating AI Data Security Risks
As AI-powered assistants become embedded in everyday productivity workflows, organizations must ensure that governance and security controls evolve alongside them.
Traditional safeguards such as DLP policies, sensitivity labels, and access restrictions are only effective if they are consistently enforced within AI-driven features like Copilot.
Proactive validation, monitoring, and risk management are essential to prevent unintended exposure or misuse of sensitive information.
- Validate that DLP policies and sensitivity labels are properly enforced within Copilot by testing how confidential content is handled across email and document workflows.
- Restrict Copilot access using role-based controls and conditional access policies to limit AI processing to appropriate users, devices and trusted environments.
- Review and harden Copilot configuration settings to ensure alignment with corporate data protection, compliance and retention policies.
- Enable comprehensive logging and integrate Copilot telemetry into SIEM or other monitoring platforms to detect anomalous AI-driven data access or summarization patterns (see the sketch after this list).
- Isolate highly sensitive workloads and apply data minimization practices to reduce unnecessary exposure of regulated or confidential content to AI tools.
- Incorporate AI-enabled SaaS features into formal risk assessments, vulnerability management programs and adversarial testing exercises to validate enforcement boundaries.
- Test incident response plans and build playbooks for scenarios involving unintended AI processing of sensitive data.
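As a starting point for the logging recommendation above, the hedged Python sketch below polls the Office 365 Management Activity API for recent audit content and keeps records whose operation name looks Copilot-related, ready to forward to a SIEM. It assumes an Entra ID app registration with the ActivityFeed.Read permission, an Audit.General subscription that has already been started, and credentials supplied through environment variables; the operation-name filter is an assumption to verify against the events your tenant actually emits.

```python
import os

import requests

# Minimal sketch for the logging bullet above. Assumptions: an Entra ID app
# registration with the Office 365 Management APIs "ActivityFeed.Read"
# permission, an already-started "Audit.General" subscription, and credentials
# in environment variables. The "copilot" operation-name filter is an
# assumption; confirm the exact operation names your tenant records before
# relying on it.

TENANT_ID = os.environ["TENANT_ID"]
CLIENT_ID = os.environ["CLIENT_ID"]
CLIENT_SECRET = os.environ["CLIENT_SECRET"]


def get_token() -> str:
    """Acquire an app-only token for the Management Activity API."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://manage.office.com/.default",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def fetch_copilot_events() -> list[dict]:
    """Pull recent Audit.General records and keep Copilot-related operations."""
    headers = {"Authorization": f"Bearer {get_token()}"}
    base = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"

    # List available audit content blobs (defaults to roughly the last day).
    blobs = requests.get(
        f"{base}/subscriptions/content",
        params={"contentType": "Audit.General"},
        headers=headers,
        timeout=30,
    )
    blobs.raise_for_status()

    events = []
    for blob in blobs.json():
        records = requests.get(blob["contentUri"], headers=headers, timeout=30)
        records.raise_for_status()
        for record in records.json():
            if "copilot" in record.get("Operation", "").lower():
                events.append(record)
    return events


if __name__ == "__main__":
    # Forward these to your SIEM; printing keeps the sketch self-contained.
    for event in fetch_copilot_events():
        print(event.get("CreationTime"), event.get("UserId"), event.get("Operation"))
```

From there, the same filter can be extended to correlate Copilot operations with sensitivity-labeled content or DLP rule matches, which also supports the validation step at the top of the list.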
Together, these measures help limit the blast radius of unintended AI data exposure while strengthening organizational resilience against emerging risks in AI-enabled SaaS environments.
AI as a New Data Risk Layer
The Copilot incident highlights that AI features function as additional data-processing layers within enterprise systems and should be governed accordingly.
As generative AI becomes more integrated into SaaS platforms, security teams need to ensure that existing controls — such as policy enforcement and monitoring — are consistently applied to AI-driven workflows, not just traditional user activity.
This shift in AI use underscores the need for zero-trust solutions that continuously verify access and enforce granular controls across users, devices, applications, and data.
