Researchers Trick Perplexity’s Comet AI Browser Into Phishing Scam in Under Four Minutes

Artificial Intelligence and Browser Security: The Dangers of AI-Powered Browsers

Agentic web browsers that use artificial intelligence (AI) to perform tasks across multiple websites on a user's behalf are at risk of falling victim to phishing and scam attacks. Security researchers have discovered that these AI browsers can be manipulated into lowering their own guardrails, leaving them vulnerable to cyber threats.

By intercepting communication between the browser and its backing AI service, researchers were able to trick an AI browser into falling for a phishing scam in a matter of minutes. The technique, dubbed Agentic Blabbering, exploits the AI browser's tendency to narrate its actions and reasoning: by observing that narration, attackers can learn which checks the model performs and tailor a scam to pass them.
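To make the idea concrete, here is a minimal, purely hypothetical Python sketch. The checks, URLs, and helper names are all invented for illustration (a real agent is driven by an LLM, not string rules); the point is only to show how a leaked reasoning trace lets an attacker refine a lure offline until it passes every check the agent revealed:

```python
def agent_visit(url: str, narration: list[str]) -> bool:
    """Toy agent: decides whether a login page looks legitimate,
    narrating each check the way agentic browsers narrate their steps."""
    narration.append(f"Visiting {url}")
    if not url.startswith("https://"):
        narration.append("Rejecting: page is not served over HTTPS")
        return False
    domain = url.split("/")[2]
    if "bank" not in domain:
        narration.append("Rejecting: domain does not mention the bank")
        return False
    narration.append("Checks passed; proceeding to enter credentials")
    return True


def attacker_refine(narration: list[str], candidate: str) -> str:
    """Toy attacker: reads the leaked narration and patches the lure URL
    to satisfy whichever check the agent just revealed."""
    if any("not served over HTTPS" in line for line in narration):
        candidate = candidate.replace("http://", "https://")
    if any("does not mention the bank" in line for line in narration):
        candidate = candidate.replace("login-site", "bank-login-site")
    return candidate


# Each failed attempt leaks, via the narration, exactly why it failed,
# so the lure converges on an accepted URL after a few offline rounds.
lure = "http://login-site.example/signin"
accepted = False
for _ in range(3):
    log: list[str] = []
    if agent_visit(lure, log):
        accepted = True
        break
    lure = attacker_refine(log, lure)
```

After the loop, the refined lure satisfies both narrated checks, which is the offline-refinement dynamic the researchers describe: the model's own explanations become the attacker's test harness.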

The implications of these attacks are significant: fraudsters can target the AI browser instead of the user directly. Because a given model behaves consistently, a scam can be refined offline against that specific model until it reliably slips past its defenses, a shift that raises serious concerns about the future of cybersecurity.

Prompt injection attacks such as intent collision, in which a benign user request is merged with malicious instructions planted in page content, pose a major challenge for AI browsers, because the combined prompt can pass through without detection. Despite efforts to mitigate these vulnerabilities, completely eliminating them may not be feasible.

In recent demonstrations, researchers have shown how prompt injection can be used to extract sensitive information from users or hijack their accounts through AI-powered browsers. These findings underscore the importance of ongoing research and development to harden AI browsers against evolving cyber threats.