Every few years, a new cyber threat emerges that changes the way we think about online security. In 2025, that moment has arrived for Gmail users. Google has issued a red alert for more than 1.8 billion Gmail accounts worldwide, warning of a new kind of attack that doesn’t rely on suspicious links, fake login screens, or malicious attachments. Instead, hackers are turning to artificial intelligence (AI) and a cutting-edge exploit called indirect prompt injections.
This attack is being dubbed one of the most sophisticated forms of Gmail phishing attack to date, because it targets not the user directly but the AI assistant that helps them manage their inbox. And in a world where AI is increasingly becoming our filter, translator, and decision-maker, this marks a dangerous turning point.
In this blog, we’ll explore what indirect prompt injections are, how hackers are abusing them to compromise Gmail, why Google is sounding the alarm, and—most importantly—what you can do to protect yourself.
For years, Gmail phishing attacks followed a predictable pattern:
But as users became more aware, and spam filters became more sophisticated, traditional phishing lost some of its effectiveness. Hackers needed a new strategy—one that bypassed human suspicion altogether.
Enter AI-powered phishing.
With the rise of Google’s Gemini AI (integrated into Gmail to summarize messages, suggest replies, and boost productivity), hackers realized they could target the AI instead of the human. If the AI could be manipulated into revealing sensitive data or taking unintended actions, the user might never realize something was wrong until it was too late.
This isn’t phishing in the traditional sense—it’s phishing for the AI era.
Indirect prompt injection is a cyber-attack that hides malicious instructions inside seemingly harmless content, such as:
When Google Gemini processes an email containing such hidden instructions—for example, when summarizing an email—it can be tricked into executing these instructions.
Think of it as a Trojan horse for AI: the email looks safe, but inside it carries hidden commands designed to manipulate the assistant.
Example scenario:
This flips the phishing model: instead of tricking the human directly, attackers poison the AI that the human relies on.
Google rarely issues sweeping red alerts—but this time, it has good reason.
Google has acknowledged the seriousness of the threat and is actively rolling out new AI safety protocols to reduce the risk of prompt manipulation.
To understand how dangerous indirect prompt injections can be, here are a few hypothetical (but entirely possible) attack scenarios:
These aren’t far-fetched—they highlight how Gmail phishing attacks are evolving in ways users aren’t prepared for.
Aspect | Traditional Gmail Phishing Attack | Indirect Prompt Injection |
Target | Human user | AI assistant |
Method | Fake login page, malicious links | Hidden prompts inside emails |
User Action Required | Clicks, downloads, or replies | None. AI executes commands |
Visibility | Detectable by cautious users | Invisible text, undetectable to humans |
Risk | Credential theft, malware installs | AI misuse, automated data leaks |
This shift is what makes Google’s warning so alarming—the attack bypasses the human entirely.
We’ve entered an era where AI is both the attacker’s tool and the defender’s shield.
But right now, attackers have the upper hand because AI systems are designed to obey instructions, not question them. This makes them highly vulnerable to manipulation.
As Google and other providers patch these flaws, we can expect a new cat-and-mouse game: hackers inventing new injection techniques, while companies build smarter guardrails.
While Google is working on long-term fixes, there are practical steps every Gmail user can take right now:
If Gemini (or any AI assistant) tells you about security alerts, password resets, or urgent actions—verify directly with the official Google Security page.
Google’s Advanced Protection Program (APP) offers stronger safeguards for at-risk users.
View the “original message” in Gmail to spot hidden formatting or suspicious metadata.
Even if a password is stolen, 2FA adds a critical extra layer of security.
This isn’t a one-time issue—AI-driven phishing will continue to evolve. Regularly check Google’s security blog for updates.
This latest wave of Gmail phishing attacks isn’t just another chapter in the cat-and-mouse game between users and hackers. It’s the beginning of an entirely new battlefield—one where AI isn’t just a tool we use, but a target attackers exploit.
For Gmail users, the implications are clear: cybersecurity is no longer about simply spotting shady links or ignoring too-good-to-be-true offers. It’s about understanding that invisible instructions meant for AI can be just as dangerous.
As AI becomes more embedded in our digital lives, so will the risks of manipulation.
Google’s red alert is more than just a warning—it’s a wake-up call. The scariest Gmail hack yet doesn’t rely on tricking you directly, but on hijacking the AI you trust. This new form of Gmail phishing attack marks a dangerous turning point in cybersecurity.
The good news? Awareness is the first step. By staying vigilant, questioning AI-generated outputs, and adopting Google’s recommended security practices, you can protect yourself from falling victim to these stealthy new threats.
The fight against cybercrime has entered a new era—and it’s no longer human vs. human, but AI vs. AI.
Worried about whether your organization is prepared for AI-powered phishing? Contact Redfox Security today to learn how our penetration testing and red team services can help safeguard your business against emerging threats.
Redfox Cyber Security Inc.
8 The Green, Ste. A, Dover,
Delaware 19901,
United States.
info@redfoxsec.com
©️2025 Redfox Cyber Security Inc. All rights reserved.