Copilot Read Emails That Should Have Been Protected

Microsoft has confirmed that a flaw in Microsoft 365 Copilot Chat made it possible for the AI chatbot to retrieve and summarize emails marked as confidential without respecting the applicable data protection rules, TechCrunch reports.

The flaw affected the "work" tab in Copilot Chat, which is designed to read and analyze content across Microsoft 365 apps such as Outlook, Word, and Excel. While messages in the inbox were protected, the folders for sent messages and drafts, including threaded email replies, were not.
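Microsoft has not published technical details, but the behavior described is consistent with a sensitivity check that was only applied to some mail folders. The sketch below is purely illustrative: the pipeline, class names, and labels are hypothetical and are not Microsoft APIs. It shows how a folder-scoped filter of this kind could let labeled mail in Sent Items and Drafts slip into an AI assistant's context.

```python
# Hypothetical sketch of the class of bug described above: a pipeline
# that screens email for sensitivity labels before handing content to
# an AI assistant, but only enforces the check on the inbox.
# All names here are illustrative, not real Microsoft APIs.

from dataclasses import dataclass

@dataclass
class Email:
    folder: str        # e.g. "inbox", "sent", "drafts"
    sensitivity: str   # e.g. "general", "confidential"
    body: str

BLOCKED_LABELS = {"confidential"}

def collect_ai_context_buggy(emails: list[Email]) -> list[Email]:
    """Flawed filter: sensitivity is only enforced for the inbox,
    so labeled mail in sent/drafts leaks into the AI's context."""
    allowed = []
    for msg in emails:
        if msg.folder == "inbox" and msg.sensitivity in BLOCKED_LABELS:
            continue  # inbox is protected...
        allowed.append(msg)  # ...but sent/drafts pass through unchecked
    return allowed

def collect_ai_context_fixed(emails: list[Email]) -> list[Email]:
    """Corrected filter: the label is honored regardless of folder."""
    return [m for m in emails if m.sensitivity not in BLOCKED_LABELS]

mailbox = [
    Email("inbox", "confidential", "Q3 acquisition terms"),
    Email("sent", "confidential", "Re: Q3 acquisition terms"),
    Email("drafts", "general", "Lunch on Friday?"),
]

print([m.folder for m in collect_ai_context_buggy(mailbox)])  # ['sent', 'drafts']
print([m.folder for m in collect_ai_context_fixed(mailbox)])  # ['drafts']
```

In this hypothetical version, the fix is simply to enforce the label everywhere instead of scoping the check to a single folder; whether Microsoft's actual root cause looked like this is not publicly known.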

Flaw Directly Impacted Corporate Governance Controls

For organizations using Microsoft 365 as a platform for sensitive communication, trust that data loss prevention (DLP) policies and sensitivity labels actually work is fundamental. The flaw undermined exactly that trust.

According to TechCrunch, Microsoft emphasized in an official statement that the flaw did not grant access to any information users were not otherwise authorized to see, but that Copilot returned content that, under the company's own rules, should have been excluded from AI processing.

"We identified and remediated an issue where Microsoft 365 Copilot Chat could return content from emails marked as confidential — this did not grant access to information they were not already authorized to see," the company stated.

It is worth noting that Microsoft's classification of the incident as an "advisory," rather than a full security breach, indicates a limited scope. Even so, an AI system bypassing security layers that businesses have explicitly configured is a potentially serious matter.


Microsoft Rolls Out Global Fix

Microsoft says it began rolling out a configuration update to enterprise customers worldwide in early February and that the root cause was addressed as of February 20, 2026, with full remediation expected to be in place by February 24. The company continues to monitor the situation and is contacting affected commercial customers directly.

At the same time, Microsoft will not disclose how many organizations or individuals were affected; according to TechCrunch, a spokesperson declined to comment on the extent of the damage.


Broader Context: AI and Data Handling Under Pressure

The incident comes at a time when AI tools' handling of sensitive corporate data is under growing scrutiny. According to Microsoft's own Data Security Index for 2026, generative AI is involved in 32 percent of organizations' data incidents, and 47 percent of businesses have now introduced AI-specific security controls, up eight percentage points from 2025.

European skepticism toward cloud-based AI tools was recently illustrated when the European Parliament blocked AI tools on its devices, citing concerns about data being uploaded to the cloud.

The issue is particularly acute for the healthcare sector. If health data ends up in AI systems without an adequate legal framework, such as a Business Associate Agreement (BAA), it could constitute a HIPAA violation, triggering notification requirements and potential sanctions.

No Similar Incidents Confirmed Among Competitors

As of this writing, available sources show no confirmed security incidents of a similar nature involving Google Gemini or OpenAI's ChatGPT, so a direct comparison cannot responsibly be made.

The Microsoft flaw is currently the only documented incident of this type among the major AI players in the enterprise market, which gives the case extra weight in the debate over how ready enterprise AI platforms really are to handle critical business information.