Reprompt Attack Lets Attackers Exfiltrate Data From Microsoft Copilot With a Single Click

by | Jan 16, 2026 | News




Join our Patreon Channel and Gain access to 70+ Exclusive Walkthrough Videos.

Patreon
Reading Time: 3 Minutes

One-Click Attack Against AI Assistants

Cybersecurity researchers have disclosed details of a novel attack technique called Reprompt that enables attackers to silently exfiltrate sensitive data from AI chatbots such as Microsoft Copilot with a single click, bypassing enterprise security controls entirely.

“Only a single click on a legitimate Microsoft link is required to compromise victims,” said Dolev Taler, security researcher at Varonis. “No plugins, no user interaction with Copilot.”

Even more concerning, researchers found that the attacker can maintain control after the Copilot chat window is closed, enabling ongoing and hidden data extraction without any further user involvement.

Microsoft has since patched the issue following responsible disclosure and confirmed that Microsoft 365 Copilot enterprise customers were not affected.


How the Reprompt Attack Works

Blog_VTL-Reprompt_Diagram_202601_V3

Attack flow diagram

At a high level, Reprompt chains together three techniques to create a covert data-exfiltration channel:

1. Prompt Injection via URL Parameters

The attack abuses the q URL parameter in Copilot links (e.g., copilot.microsoft.com/?q=Hello) to inject a malicious instruction directly from the URL itself—without the user typing anything.

2. Guardrail Bypass Through Repetition

By instructing Copilot to repeat actions twice, attackers exploit the fact that data-leak safeguards are applied only to the initial request, allowing subsequent responses to bypass protections.

3. Persistent Server-Driven Reprompting

The injected prompt initiates a loop in which Copilot continuously fetches instructions from an attacker-controlled server. This enables dynamic, hidden, and ongoing data exfiltration, even if the chat session is closed.

Instructions may include commands such as:

  • “Summarize all files the user accessed today”
  • “Where does the user live?”
  • “What vacations does the user have planned?”

Because follow-up instructions are delivered remotely, the original prompt reveals nothing about what data is ultimately stolen.

See Also: So, you want to be a hacker?
Offensive Security, Bug Bounty Courses



Discover your weakest link. Be proactive, not reactive. Cybercriminals need just one flaw to strike.

Turning Copilot Into an Invisible Exfiltration Channel

Reprompt effectively transforms Copilot into a covert data channel that operates without plugins, connectors, or visible prompts.

“The real instructions are hidden in the server’s follow-up requests,” Varonis explained. “There’s no limit to the amount or type of data that can be exfiltrated.”

This creates a significant security blind spot, as defenders cannot determine what information is being accessed by reviewing the initial link alone.


Root Cause: Indirect Prompt Injection

Like many emerging AI attacks, Reprompt exploits a fundamental limitation in large language models: the inability to reliably distinguish between trusted user instructions and untrusted data embedded in requests.

This enables indirect prompt injection, where malicious instructions are smuggled through URLs, documents, or other inputs the AI processes as legitimate context.


Part of a Growing Wave of AI Attacks

The Reprompt disclosure coincides with a surge in adversarial techniques targeting AI systems, including:

  • ZombieAgent / ShadowLeak – Zero-click prompt injections that turn chatbots into data exfiltration tools
  • Lies-in-the-Loop (HITL Dialog Forging) – Abusing confirmation prompts to execute malicious actions
  • GeminiJack – Hidden instructions in Google Docs and calendar invites targeting Gemini Enterprise
  • CellShock – Prompt injection in Claude for Excel to exfiltrate spreadsheet data
  • MCP Sampling Abuse – Draining AI compute quotas and injecting persistent instructions
  • AI Browser Prompt Injections – Bypassing safety controls in tools like Perplexity Comet
  • Enterprise AI Budget Abuse – Attacks against Cursor and Amazon Bedrock leaking API tokens

Researchers warn that as AI systems gain autonomy and access to sensitive corporate data, the impact of a single vulnerability increases dramatically.




Security Recommendations

Experts recommend adopting layered defenses to mitigate AI-specific threats:

  • Treat AI links as potentially dangerous, even on legitimate domains
  • Restrict AI access to sensitive data and privileged tools
  • Monitor AI activity for anomalous or excessive data access
  • Avoid sharing personal or confidential information in AI chats
  • Clearly define trust boundaries between user input and external data

“As AI agents gain broader access to corporate data and autonomy to act on instructions, the blast radius of a single vulnerability expands exponentially,” warned Noma Security.

Are u a security researcher? Or a company that writes articles about Cyber Security, Offensive Security (related to information security in general) that match with our specific audience and is worth sharing? If you want to express your idea in an article contact us here for a quote: [email protected]

Sources: thehackernews.com, varonis.com/blog/reprompt

Source Link

Merch

Recent News

EXPLORE OUR STORE

Offensive Security & Ethical Hacking Course

Begin the learning curve of hacking now!


Information Security Solutions

Find out how Pentesting Services can help you.


Join our Community

Share This