
Logue was able to demonstrate (in a proof of concept), creating financial sheets with crafted instructions in white text. A successful exploit led the user to the attacker-controlled login. “When I asked M365 Copilot to summarize the document, it no longer told me it was about financial information and instead, responded with an excuse that the document contained sensitive information and couldn’t be viewed without proper authorization or logging in first,” Logue said.
The bigger threat of indirect prompt injection
The incident underscores that the risk goes beyond simple “prompt injection,” where a user types malicious instructions directly into an AI. Here, the attacker hides instructions inside document content that gets passed into the assistant without the user’s awareness. Logue described how the hidden instructions use progressive task modification (e.g, “first summarise, then ignore that and do X”) layered across spreadsheet tabs.
Additionally, the disclosure exposes a new attack surface where the diagram-generation feature (Mermaid output) becomes the exfiltration channel. Logue explained that clicking the diagram opened a browser link that quietly sent the encoded email data to an attacker-controlled endpoint. The transfer happened through a standard web request, making it indistinguishable from a legitimate click-through in many environments.
