Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits

On September 30, 2025, cybersecurity researchers revealed three security vulnerabilities in Google’s Gemini artificial intelligence (AI) assistant that have since been patched. These vulnerabilities posed serious privacy risks and potential data theft for users.

The vulnerabilities, collectively known as the Gemini Trifecta, affected different components of the Gemini suite. One flaw allowed attackers to inject prompts into Gemini Cloud Assist, potentially compromising cloud resources. Another flaw in the Gemini Search Personalization model enabled attackers to manipulate the AI chatbot’s behavior and leak user information. The third flaw in the Gemini Browsing Tool could be exploited to exfiltrate user data to an external server.

Tenable, the cybersecurity company that discovered the vulnerabilities, highlighted the potential for attackers to embed private user data in requests to malicious servers. Google has since implemented measures to prevent prompt injections and enhance security.

The incident underscores the importance of securing AI tools as they become more prevalent in organizations. The CodeIntegrity security platform also recently detailed an attack leveraging Notion’s AI agent for data exfiltration, emphasizing the need for vigilance in protecting sensitive information.

As technology advances, it is crucial for companies to prioritize cybersecurity and implement measures to safeguard against potential vulnerabilities in AI systems.