Microsoft 365 Copilot Faces Zero-Click Data Exfiltration Threat Through EchoLeak AI Vulnerability

Security researchers have identified a sophisticated artificial intelligence vulnerability dubbed EchoLeak that enables attackers to extract sensitive information from Microsoft 365 Copilot without requiring any user interaction

Microsoft 365 Copilot Faces Zero-Click Data Exfiltration Threat Through EchoLeak AI Vulnerability

Security researchers have identified a sophisticated artificial intelligence vulnerability dubbed EchoLeak that enables attackers to extract sensitive information from Microsoft 365 Copilot without requiring any user interaction. This critical security flaw, designated CVE-2025-32711 with a CVSS score of 9.3, has been resolved by Microsoft as part of their June 2025 Patch Tuesday update.

Vulnerability Overview

The EchoLeak attack represents a Large Language Model (LLM) Scope Violation that facilitates indirect prompt injection, causing unintended AI behavior. Discovered and reported by Aim Security, this vulnerability allows unauthorized attackers to access and disclose sensitive organizational data through network communications, despite Microsoft's security controls.

Microsoft addressed the issue proactively without requiring customer intervention, and no evidence suggests malicious exploitation in real-world scenarios. The vulnerability was included among 68 security fixes in Microsoft's June 2025 patch release.

Attack Mechanism

The exploit operates through a multi-stage process that leverages Microsoft 365 Copilot's Retrieval-Augmented Generation (RAG) engine. Attackers embed malicious prompt instructions within markdown-formatted content, such as emails, which the AI system processes alongside legitimate organizational data.

The attack sequence proceeds as follows:

Initial Injection: Attackers transmit seemingly benign emails containing embedded LLM scope violation exploits to employee Outlook inboxes.

User Interaction: Employees ask Microsoft 365 Copilot routine business questions, such as requests to summarize earnings reports or analyze documents.

Scope Violation: Copilot's RAG engine combines the malicious external input with sensitive internal data within the LLM processing context.

Data Exfiltration: The compromised system leaks confidential information to attackers through Microsoft Teams and SharePoint URLs.

The attack's "zero-click" nature means no user interaction beyond normal Copilot usage is required. The vulnerability exploits Copilot's default behavior of integrating content from multiple sources without maintaining proper trust boundaries between external and internal data.

Security Implications

EchoLeak poses significant risks because it manipulates how Copilot retrieves and prioritizes information using internal document access privileges. Attackers can influence this process through carefully crafted payload prompts embedded in innocuous sources like meeting notes or email communications.

The vulnerability enables extraction of the most sensitive data from the LLM's current context, with the AI system inadvertently assisting in identifying and leaking critical information. The attack works effectively in both single-turn and multi-turn conversations, expanding its potential impact.

Model Context Protocol Vulnerabilities

Concurrent research from CyberArk has revealed additional AI security concerns involving the Model Context Protocol (MCP) standard. The company identified a tool poisoning attack called Full-Schema Poisoning (FSP) that extends beyond traditional description field attacks to encompass the entire tool schema.

Security researcher Simcha Kosman emphasized that every component of the tool schema represents a potential injection point, not just the commonly focused description fields. This broader attack surface significantly increases the risk of successful tool poisoning attempts.

The vulnerability stems from MCP's "fundamentally optimistic trust model" that assumes syntactic correctness equals semantic safety and presumes LLMs only process explicitly documented behaviors.

Advanced Tool Poisoning Attacks

Tool poisoning attacks and FSP can be combined to create Advanced Tool Poisoning Attacks (ATPA), where attackers design tools with benign descriptions but deploy fake error messages. These deceptive messages manipulate the LLM into accessing sensitive information, such as SSH keys, under the pretense of resolving technical issues.

A critical security flaw in the popular GitHub MCP integration exemplifies these risks. The vulnerability allows attackers to compromise user agents through malicious GitHub issues, potentially leading to data leakage from private repositories when users request the model to examine repository issues.

The toxic agent flow occurs when malicious payloads embedded in public repository issues execute automatically as soon as the agent queries the issue list. This architectural vulnerability cannot be resolved through server-side patches alone and requires users to implement granular permission controls and continuous monitoring.

MCP Rebinding Attack Vector

The growing adoption of MCP as enterprise automation infrastructure has introduced new attack vectors, including DNS rebinding attacks that exploit Server-Sent Events (SSE) protocols used for real-time communication between MCP servers and clients.

DNS rebinding attacks deceive victim browsers into treating external domains as internal network resources, effectively bypassing same-origin policy restrictions. These attacks typically initiate when users visit malicious websites through phishing or social engineering campaigns.

The MCP rebinding attack leverages adversary-controlled websites to access internal resources on victims' local networks, enabling interaction with MCP servers running on localhost through SSE connections. This technique allows attackers to pivot from external phishing domains to target internal MCP servers and exfiltrate confidential data.

Research from the Straiker AI Research (STAR) team demonstrates how attackers can abuse SSE's persistent connections to bridge external threats with internal system access.

Mitigation Strategies

To address these emerging AI security threats, organizations should implement several protective measures:

Authentication Enforcement: Implement robust authentication mechanisms on MCP servers to prevent unauthorized access.

Origin Header Validation: Validate the "Origin" header on all incoming MCP server connections to ensure requests originate from trusted sources.

Permission Controls: Establish granular permission systems that limit agent access to only necessary repositories and resources.

Continuous Monitoring: Implement ongoing auditing of interactions between AI agents and MCP systems to detect suspicious activity.

Protocol Updates: Transition from deprecated SSE protocols to Streamable HTTP, which offers improved security against DNS rebinding attacks.

Industry Impact

These discoveries highlight the evolving security challenges associated with AI agent deployment in enterprise environments. As LLM agents become more autonomous and capable, their interactions with external tools and protocols will increasingly define operational safety and reliability.

The research underscores critical blind spots in current AI implementations and emphasizes the need for enhanced security frameworks that account for the unique risks posed by AI-powered automation systems. Organizations deploying AI agents must balance functionality with security considerations to prevent unintended data exposure and system compromise.