Zero-Click AI Vulnerability Exposes Major Security Risks

The EchoLeak vulnerability, identified as CVE-2025-32711, is a significant zero-click AI security issue affecting Microsoft 365 (M365) Copilot. This flaw allows hackers to extract private information without user interaction by exploiting the AI's normal operations. Microsoft has addressed this vulnerability in their June 2025 update, which included fixes for 68 issues. EchoLeak was discovered by Aim Security, highlighting how large language models (LLMs) can be manipulated to breach their operational boundaries through deceptive instructions embedded in seemingly innocuous content like emails.
The attack leverages the AI's ability to integrate data from various sources, such as Outlook and SharePoint, turning a beneficial feature into a data leakage vector. This method is particularly dangerous as it requires no user action and can occur during both short and extended interactions. The vulnerability underscores the risks associated with AI design, where hackers can influence data retrieval processes by embedding malicious instructions in benign-looking documents.
Additionally, CyberArk has identified a related threat called Full-Schema Poisoning (FSP), which involves tool poisoning attacks (TPA) that exploit the trust systems place in tool descriptions. These attacks can deceive AI into leaking sensitive information by presenting fake error messages. As AI tools become more sophisticated, their security depends heavily on how they interact with other systems and data.
The report also discusses the broader implications of AI vulnerabilities, such as those found in GitHub's use of Microsoft Cloud Platform (MCP). These vulnerabilities can lead to significant security risks, including DNS rebinding attacks, which trick browsers into treating external sites as part of the internal network, allowing hackers to bypass security measures and access private data.
Overall, the EchoLeak vulnerability and related issues highlight the critical need for robust security measures in AI systems, emphasizing the importance of controlling access and monitoring AI interactions to prevent data breaches.
Stay secure — stay Wavasec. 🔐