All revisions

Revision #1

System

23 days ago

When Your AI Assistant Becomes the Attack Vector: A Critical Excel Bug Turns Microsoft Copilot Against Its Users

The promise of AI-powered productivity tools has never been more enticing — or more dangerous. As Microsoft races to embed its Copilot AI into every corner of the enterprise software stack, security researchers are uncovering a disturbing pattern: the same broad access that makes AI assistants useful also makes them potent weapons for attackers. A critical vulnerability disclosed in Microsoft's March 2026 Patch Tuesday (CVE-2026-26144) now demonstrates that a single flaw in Excel can weaponize Copilot's Agent mode to silently exfiltrate corporate data without any user interaction [1][2].

But this latest bug is not an isolated incident. It is the third major zero-click or near-zero-click vulnerability targeting Microsoft Copilot in under a year, part of a rapidly emerging attack class that security experts warn could fundamentally reshape enterprise threat modeling.

The Excel Bug: CVE-2026-26144

On March 10, 2026, Microsoft patched CVE-2026-26144, a critical-severity cross-site scripting (XSS) vulnerability in Microsoft Excel with a CVSS score of 7.5 [1][3]. The flaw, caused by improper input neutralization during web page generation, can be exploited to "cause Copilot Agent mode to exfiltrate data via unintended network egress, enabling a zero-click information disclosure attack" [1].

The attack requires no user interaction and has low attack complexity. An unauthenticated attacker could exploit this issue over the network to expose sensitive information by abusing Copilot Agent mode to trigger unintended outbound data exfiltration [2][3]. The Preview Pane is not an attack vector for this vulnerability, according to Microsoft [1].

"Information disclosure vulnerabilities are especially dangerous in corporate environments where Excel files often contain financial data, intellectual property, or operational records," noted analysts at CrowdStrike in their March Patch Tuesday analysis [4]. "If exploited, attackers could silently extract confidential information from internal systems without triggering obvious alerts."

Microsoft stated that exploitation is "currently considered unlikely," with no evidence of public disclosure or active abuse at the time of release [1]. However, the vulnerability was patched as part of a broader March 2026 Patch Tuesday that addressed 82 CVEs, including eight critical flaws and two publicly disclosed zero-days [5][6].

A Pattern Emerges: EchoLeak and the Birth of Zero-Click AI Attacks

CVE-2026-26144 does not exist in a vacuum. It follows a trajectory of increasingly sophisticated attacks against AI assistants that began with the landmark EchoLeak vulnerability (CVE-2025-32711) disclosed by researchers at Aim Security in June 2025 [7][8].

EchoLeak, which carried a CVSS score of 9.3, was the first known zero-click prompt injection exploit against a production LLM system [9]. The attack mechanism was deceptively simple: an attacker sends an innocent-looking email containing hidden instructions targeted at Copilot. Since Copilot scans user emails in the background, it reads the message and follows the embedded prompt — digging into internal files and pulling out sensitive data. The AI then conceals the source of the instructions, making the exfiltration virtually undetectable [7][8].

Timeline of Major AI Assistant Security Vulnerabilities
Source: NVD / Microsoft MSRC / Vendor Advisories
Data as of Mar 11, 2026CSV

Potentially exposed information included anything within Copilot's access scope: chat logs, OneDrive files, SharePoint content, Teams messages, and other organizational data [8]. Aim Security researchers described a novel exploitation technique they called "LLM Scope Violation," where external, untrusted input manipulated the AI model to access and leak confidential data [10].

Microsoft patched the vulnerability server-side in May 2025 without requiring customer action [7]. The fix limited Copilot's ability to follow hidden adversarial prompts in files. There was no evidence the flaw was exploited maliciously in the wild.

Reprompt: One Click to Full Compromise

In January 2026, Varonis Threat Labs disclosed yet another attack vector — dubbed "Reprompt" — that demonstrated how a single click on a legitimate Microsoft link could silently compromise a Copilot session and exfiltrate personal data [11][12].

Unlike EchoLeak's zero-click approach, Reprompt required exactly one user action: clicking a crafted URL. But from that point forward, the attacker maintained persistent control over the Copilot session, even after the chat window was closed [11]. The attack employed three chained techniques: parameter-to-prompt (P2P) injection, double request, and chain-request. Copilot accepts prompts via the 'q' parameter in its URL and executes them automatically when the page loads, giving attackers a direct injection point [11].

Once the session was compromised, attackers could instruct Copilot to summarize all files accessed that day, extract personal details like home addresses, or retrieve planned vacations — all without the user's knowledge [11][12].

"This is an invisible entry point to perform a data-exfiltration chain that bypasses enterprise security controls entirely," Varonis researchers wrote [11]. The exploit was reported to Microsoft in August 2025 and patched by January 13, 2026.

Not Just Microsoft: GeminiJack Exposes an Industry-Wide Problem

The vulnerability pattern extends beyond Microsoft. In June 2025, Noma Labs discovered GeminiJack, a zero-click indirect prompt injection vulnerability in Google Gemini Enterprise and Vertex AI Search [13][14]. The attack demonstrated that any attacker who could share a Google Doc, send a calendar invite, or forward an email could embed hidden instructions that Gemini executed as legitimate commands.

"A single poisoned document could exfiltrate years of email, complete calendar histories, and entire document repositories with zero clicks, zero warnings, and zero DLP alerts," Noma Security researchers reported [13]. Google subsequently separated Vertex AI Search from Gemini Enterprise and restructured its underlying retrieval-augmented generation (RAG) architecture to address the flaw [14].

The cross-platform nature of these vulnerabilities suggests that the problem is architectural, not specific to any vendor. AI assistants that combine broad data access with autonomous action capabilities create a fundamentally new attack surface.

The Copilot Enterprise Attack Surface

The scale of potential exposure is significant. Microsoft reported 15 million paid Copilot seats across 450 million commercial Microsoft 365 users as of its January 2026 earnings call — a 3.3% conversion rate [15]. More than 90% of Fortune 500 companies are now using Microsoft 365 Copilot in some capacity, though most deployments remain in pilot or phased rollout stages rather than enterprise-wide [16][17].

Nearly half of IT leaders surveyed said they lack confidence in their ability to manage Copilot's security and access risks [18]. This concern appears well-founded: Copilot can access all sensitive data available to a user based on their Microsoft 365 permissions. If those permissions are too broad — a common issue in enterprise environments — Copilot can surface proprietary data, business-sensitive information, or customer records to anyone who can manipulate its instructions [18].

In February 2026, Microsoft's own security blog acknowledged the challenge, publishing a detailed "Top 10 Risks" framework for Copilot Studio agents that highlighted dangers including hard-coded credentials in agent configurations, agents using generative orchestration without defined instructions, and uncontrolled data access patterns [19].

The Anatomy of an AI-Powered Attack

What makes these vulnerabilities particularly concerning is how they invert traditional security assumptions. In a conventional attack, the adversary must breach perimeter defenses, escalate privileges, locate sensitive data, and exfiltrate it — each step presenting an opportunity for detection. AI agent attacks compress this into a single step: trick the AI into doing it for you.

"Traditional perimeter defenses, endpoint protection, and DLP tools weren't designed to detect when your AI assistant becomes an exfiltration engine," Noma Security observed in their GeminiJack disclosure [13]. The AI assistant already has the access, the search capability, and the ability to format and transmit data. The attacker only needs to supply the instructions.

This represents what researchers have termed a "trust boundary collapse" — the AI cannot reliably distinguish between legitimate user instructions and adversarial prompts embedded in documents, emails, or spreadsheets it processes [9][10]. Every piece of content the AI ingests becomes a potential command channel.

Microsoft's Defensive Posture

Microsoft has responded to the growing threat with a defense-in-depth strategy. In July 2025, the company published a detailed blog post on its approach to defending against indirect prompt injection attacks, outlining safeguards including prompt filtering, content separation, memory controls, and continuous monitoring [20].

For the March 2026 Patch Tuesday specifically, Microsoft recommended that organizations unable to immediately deploy patches should restrict outbound network traffic from Office applications, monitor unusual network requests generated by Excel processes, and consider disabling or limiting AI-driven automation features such as Copilot Agent mode to reduce exposure [1][2].

The company has also rolled out Copilot Control System capabilities, enabling administrators to manage agent permissions, restrict data access, and apply Microsoft Purview Data Loss Prevention policies to prevent Copilot from processing sensitive files [21].

What Comes Next

The cadence of AI assistant vulnerabilities is accelerating. From EchoLeak in June 2025 to Reprompt in January 2026 to CVE-2026-26144 in March 2026, the interval between major disclosures is shrinking. Each new attack demonstrates novel techniques for exploiting the fundamental tension between AI utility and security.

Media Coverage Volume: Copilot Security Topics
Source: GDELT Project
Data as of Mar 11, 2026CSV

Security researchers expect the trend to intensify as AI agents gain broader capabilities. Microsoft's recent launch of Copilot Cowork — a more autonomous agent framework built with input from Anthropic — and the expansion of Agent Mode across PowerPoint, Word, and other applications signals that the attack surface will only grow [22].

"As AI agents gain broader access to corporate data and autonomy to act on instructions, the blast radius of a single vulnerability expands exponentially," warned researchers at Dark Reading in their analysis of the zero-click exploit landscape [23]. The race between AI capabilities and AI security is far from settled — and the stakes, measured in corporate data and intellectual property, have never been higher.

Recommendations for Enterprise Security Teams

For organizations deploying Microsoft 365 Copilot or similar AI assistants, security experts recommend several immediate actions:

  • Audit permissions aggressively: Ensure Copilot can only access data that users genuinely need. Over-permissioning is the primary amplifier for these attacks [18].
  • Apply patches immediately: CVE-2026-26144 and related vulnerabilities should be treated as high-priority patches, regardless of Microsoft's exploitation likelihood assessment [1][4].
  • Monitor outbound traffic: Watch for unusual network requests originating from Office applications, particularly Excel, which may indicate data exfiltration attempts [2].
  • Consider restricting Agent mode: Until the security posture matures, limiting AI-driven automation features can reduce the attack surface [1].
  • Implement Purview DLP controls: Use Microsoft's own data loss prevention tools to restrict what Copilot can access and process [21].
  • Train employees: Users should understand that AI assistants can be manipulated through malicious content in emails, documents, and spreadsheets they receive [20].

Sources (23)

  1. [1]
    Critical Microsoft Excel bug weaponizes Copilot Agent for zero-click information disclosure attacktheregister.com

    CVE-2026-26144 is a critical-severity information disclosure vulnerability in Microsoft Excel that can cause Copilot Agent mode to exfiltrate data via unintended network egress.

  2. [2]
    March Patch Tuesday: Three high severity holes in Microsoft Officecsoonline.com

    Information disclosure vulnerabilities are especially dangerous in corporate environments where Excel files often contain financial data, intellectual property, or operational records.

  3. [3]
    Microsoft Patch Tuesday – March 2026lansweeper.com

    CVE-2026-26144, a critical XSS bug in Excel with CVSS 7.5, could enable zero-click information disclosure via Copilot Agent mode.

  4. [4]
    March 2026 Patch Tuesday: Updates and Analysiscrowdstrike.com

    Eight Critical vulnerabilities patched among 82 CVEs, with elevation of privilege accounting for 56% of all patches.

  5. [5]
    Microsoft March 2026 Patch Tuesday fixes 2 zero-days, 79 flawsbleepingcomputer.com

    Microsoft's March 2026 Patch Tuesday includes 79 flaws and 2 publicly disclosed zero-days, none actively exploited.

  6. [6]
    Microsoft patches 80+ vulnerabilities, six flagged as more likely to be exploitedhelpnetsecurity.com

    Six vulnerabilities were flagged as more likely to be exploited in the March 2026 Patch Tuesday.

  7. [7]
    Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interactionthehackernews.com

    EchoLeak (CVE-2025-32711) is the first known zero-click attack on an AI agent, with a CVSS score of 9.3.

  8. [8]
    EchoLeak AI Attack Enabled Theft of Sensitive Data via Microsoft 365 Copilotsecurityweek.com

    Potentially exposed data included chat logs, OneDrive files, SharePoint content, Teams messages, and organizational data.

  9. [9]
    EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM Systemarxiv.org

    EchoLeak represents the first known case of a prompt injection being weaponized to cause concrete data exfiltration in a production AI system.

  10. [10]
    CVE-2025-32711 Vulnerability: EchoLeak Flaw in Microsoft 365 Copilotsocprime.com

    EchoLeak targets Copilot's prompt parsing behavior, exploiting LLM Scope Violation to manipulate the AI model.

  11. [11]
    Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Datavaronis.com

    Varonis Threat Labs uncovered Reprompt, an invisible entry point to perform data exfiltration that bypasses enterprise security controls from one click.

  12. [12]
    Reprompt attack hijacked Microsoft Copilot sessions for data theftbleepingcomputer.com

    The Reprompt attack maintained persistent control of Copilot sessions even after the chat window was closed.

  13. [13]
    GeminiJack: The Google Gemini Zero-Click Vulnerability Leaked Gmail, Calendar and Docs Datanoma.security

    A single poisoned document could exfiltrate years of email, complete calendar histories, and entire document repositories.

  14. [14]
    New GeminiJack 0-Click Flaw in Gemini AI Exposed Users to Data Leakshackread.com

    Google separated Vertex AI Search from Gemini Enterprise and restructured its RAG architecture to address the flaw.

  15. [15]
    Microsoft Copilot Adoption Statistics & Trends (2026)stackmatix.com

    Microsoft reported 15 million paid Copilot seats with a 3.3% conversion rate across 450 million commercial users.

  16. [16]
    What Microsoft 365 Copilot Adoption Really Looks Likelighthouseglobal.com

    Seventy percent of Fortune 500 companies have adopted Microsoft 365 Copilot, though most remain in pilot phases.

  17. [17]
    Ignite 2024: Why nearly 70% of the Fortune 500 now use Microsoft 365 Copilotnews.microsoft.com

    More than 90% of Fortune 500 companies are now using Microsoft 365 Copilot in some capacity.

  18. [18]
    Microsoft 365 Co-pilot Security Risks: Complete Enterprise Safety Guidemetomic.io

    Nearly half of IT leaders say they lack confidence in their ability to manage Copilot's security and access risks.

  19. [19]
    Detecting and mitigating common agent misconfigurationsmicrosoft.com

    Microsoft published Top 10 Risks framework for Copilot Studio agents, including hard-coded credentials and uncontrolled data access.

  20. [20]
    How Microsoft defends against indirect prompt injection attacksmicrosoft.com

    Microsoft outlines defense-in-depth strategy including prompt filtering, content separation, memory controls, and continuous monitoring.

  21. [21]
    Copilot Control System Security and Governancelearn.microsoft.com

    Microsoft provides Copilot Control System for administrators to manage agent permissions and apply data loss prevention policies.

  22. [22]
    Microsoft debuts Copilot Cowork built with Anthropic's helpfortune.com

    Microsoft launches Copilot Cowork, a more autonomous agent framework, expanding AI capabilities across its product suite.

  23. [23]
    AI Agents Access Everything, Fall to Zero-Click Exploitdarkreading.com

    As AI agents gain broader access to corporate data, the blast radius of a single vulnerability expands exponentially.