Microsoft has confirmed that a bug in Microsoft 365 Copilot allowed the AI assistant to read and summarize emails marked with sensitivity labels — the exact labels designed to keep automated tools out.
What Happened
Since around January 21, Copilot’s “work tab” chat feature was picking up messages from users’ Sent Items and Drafts folders, ignoring sensitivity labels and Data Loss Prevention (DLP) policies. If you asked Copilot to summarize recent emails, it would happily include messages it should never have touched.
The issue is tracked as CW1226324.
Why It Matters
Organizations using Microsoft Purview sensitivity labels — especially in regulated industries like healthcare, legal, and finance — rely on DLP policies as a hard boundary. This bug turned that boundary into a suggestion. Anything in Sent Items or Drafts tagged as confidential was fair game for Copilot to surface in a chat summary.
For firms handling privileged communications or protected health information, that is a potential compliance incident.
Current Status
Microsoft says a code error was responsible and began rolling out a fix in early February. As of February 18, the company is still monitoring the deployment and reaching out to affected users to verify the fix.
What to Do
- Search your audit logs for Copilot interactions that surfaced confidential content between January 21 and the date the fix reached your tenant.
- Verify the fix is active in your tenant — Microsoft is rolling it out in stages.
- Review your DLP strategy. If your compliance posture assumes DLP labels are enforced by all Microsoft tools, this is a reminder to test that assumption regularly.
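The audit-log check above can be scripted once you have exported the unified audit log. Here is a minimal Python sketch that filters an export for Copilot interactions inside the incident window; the JSON-lines format and the `CreationDate`, `Operation`, and `CopilotInteraction` field/operation names are assumptions based on typical Purview audit exports, so verify them against your actual data before relying on the results:

```python
import json
from datetime import datetime, timezone

def copilot_records_in_window(lines, start, end):
    """Yield audit records for Copilot interactions within [start, end].

    `lines` is an iterable of JSON strings, one exported audit record per
    line. The field names ("CreationDate", "Operation") and the operation
    value ("CopilotInteraction") are illustrative; adjust them to match
    the schema of your tenant's audit export.
    """
    for line in lines:
        rec = json.loads(line)
        # Exported timestamps are typically ISO 8601; treat them as UTC.
        ts = datetime.fromisoformat(rec["CreationDate"]).replace(tzinfo=timezone.utc)
        if rec.get("Operation") == "CopilotInteraction" and start <= ts <= end:
            yield rec
```

Feed it the export plus the incident window (January 21 through whenever the fix landed in your tenant), then review each surviving record for sensitivity-labeled content.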