Google Gemini Abused to Spread Phishing via AI-Generated Email Summaries

New Kind of AI-Driven Threat
Security researcher Marco Figueroa has uncovered a novel prompt injection vulnerability affecting Google Gemini within Workspace. The exploit allows attackers to manipulate email summaries generated by the AI, tricking users into trusting fabricated security warnings—without using links, attachments, or visible malware.
The attack was disclosed through Mozilla’s 0din bug bounty program, which focuses on generative AI security.
How the Gemini Exploit Works
Crafting the malicious email
Source: 0DIN
Hidden in Plain Sight
The attack exploits invisible directives embedded in email content using HTML/CSS. Here’s how it works:
- An attacker crafts an email with hidden instructions using white text on a white background and font size set to zero.
- These instructions are not visible to the human reader in Gmail.
- However, Google Gemini reads and executes the hidden text when summarizing the message.
One demonstration shows Gemini falsely alerting a user that their Gmail password has been compromised, and providing a fake support phone number.
Gemini summary result served to the user
Source: 0DIN
See Also: So, you want to be a hacker?
Offensive Security, Bug Bounty Courses
Why It Bypasses Traditional Defenses
Because the attack does not include suspicious links or attachments, it bypasses:
- Spam filters
- Antivirus scanners
- Gmail’s native phishing detection
If users request a summary using Gemini, they’re served a convincing, AI-generated message that appears to be from Google, warning of account compromise and encouraging contact with malicious actors.
Detection and Mitigation Strategies
Figueroa recommends the following countermeasures for security teams:
- Filter or sanitize hidden content (e.g., zero-size or white-on-white text)
- Post-process Gemini outputs for:
- Fake alerts
- Suspicious phone numbers
- High urgency prompts
- Educate users to treat Gemini-generated summaries with caution, especially if they include urgent security advice.
Trending: Oracle ILOM Compromise via EternalBlue
Trending: Recon Tool: WaybackLister
Google’s Response
A Google spokesperson acknowledged the issue and responded with the following:
“We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks.”
They stated that no real-world exploitations have been detected to date and confirmed that mitigations are underway, with some already deployed or in progress.
Google also pointed reporters to its public blog outlining technical measures against prompt injection in Gemini and other generative tools.
Are u a security researcher? Or a company that writes articles about Cyber Security, Offensive Security (related to information security in general) that match with our specific audience and is worth sharing? If you want to express your idea in an article contact us here for a quote: [email protected]
Source: bleepingcomputer.com












