A zero-click vulnerability in Microsoft 365 Copilot posed a data leak risk, but Microsoft swiftly addressed the issue, assuring users of their safety.
Microsoft 365 Copilot, the AI-powered office assistant, has recently come under scrutiny from cybersecurity experts. A report revealed a critical vulnerability allowing a 'zero-click attack' – meaning sensitive user data could be compromised without the user clicking any links or downloading any files.
This alarming information was disclosed by AIM Security, a cybersecurity startup. They named the vulnerability 'EchoLeak,' a type of Cross-Prompt Injection Attack (XPIA). This attack cleverly manipulated Copilot without user interaction, forcing it to extract private user information.
What happened?
Microsoft 365 Copilot, acting as an AI-powered assistant within Office apps like Word, Excel, and Outlook, could be controlled via a simple text email. AIM Security researchers demonstrated that a malicious email containing hidden instructions could, upon processing by Copilot, extract user information from OneDrive, Outlook, or Teams and send it to the attacker.
This attack was considered extremely dangerous because it required no user action – no link clicks or file downloads. The attack could activate simply upon email receipt.
Agentic Capabilities Become a Threat
AI chatbots, such as Microsoft Copilot, possess 'agentic capabilities' – meaning they don't just respond; they actively access user files, create schedules, read emails, and respond to them.
When these capabilities fall under malicious instructions, the severity of the threat increases significantly. EchoLeak exploited this weakness, using Copilot 'against itself' – making Copilot itself the means of extracting and sending data.
How was the attack executed?
1. Sending a hidden prompt via email
The attacker sent an email containing covert instructions hidden within alt text or markdown.
2. Utilizing trusted domains in Teams or Outlook
Copilot connects to trusted platforms, potentially leading it to consider suspicious links as trustworthy.
3. Data transfer via GET Request
Once the instructions were processed, Copilot could extract data from the user's system and send it to the attacker's server via a GET request.
4. Attack without user interaction
AIM demonstrated how an attack could be launched via a Teams message without user prompting – a true zero-click attack.
Microsoft's Response
Microsoft acknowledged the report, thanked AIM Security, and stated that the vulnerability was patched in May 2025. Microsoft claims no customer experienced actual harm, and they patched the bug on the server-side in a timely manner.
A Microsoft spokesperson stated:
'We took immediate action, and no customer was at risk. We appreciate the help of security researchers.'
Expert Opinions
- AIM Security: 'EchoLeak serves as a warning that AI systems can have all the vulnerabilities of a traditional application – and potentially more.'
- AI Risk Analyst, Mumbai: 'AI agents like Copilot need limited permissions. Granting access to data every time is now extremely risky.'
Possible preventative measures
1. Define Copilot access limits
Limit automated data access from services like OneDrive, Outlook, or Teams.
2. Monitor AI chatbot behavior
Use logging and monitoring to track the types of requests the chatbot is processing.
3. Enhance email formatting security
Filter inputs like markdown, alt-text, or embedded code within the chatbot interface.
4. Regular updates and patching
Immediately apply all security updates released by platforms like Microsoft.