A new cybersecurity study by Brave, a privacy-focused browser company, has uncovered troubling vulnerabilities in emerging AI-powered browsers like Perplexity’s Comet and Fellou. The research, led by Artem Chaikin, Senior Mobile Security Engineer, and Shivan Kaul Sahib, VP of Privacy and Security at Brave, reveals how prompt injections, or malicious hidden instructions, can manipulate AI assistants through screenshots or webpage navigation.
In Perplexity’s Comet assistant, users can take website screenshots and ask questions about them. But Brave’s team found that attackers can exploit this feature by embedding nearly invisible text into those images, for example, faint blue text on a yellow background.
While human users can’t see it, the AI’s text recognition system reads it as a command, not as harmless content. These hidden prompts then instruct the AI to act maliciously, potentially using browser tools in unsafe ways.
When Browsing Becomes a Backdoor
The Fellou browser also showed a different, yet equally concerning flaw. Brave’s report noted that even though Fellou resisted hidden text attacks, it still treated visible webpage content as trusted input.
In simple terms, if a user asked the AI to “open a website,” the browser would automatically feed the entire webpage text into the AI model.
Attackers can exploit this behaviour by placing visible malicious instructions on their site. Once opened, those commands can override the user’s intent, tricking the AI into performing unwanted actions such as navigating to harmful pages or accessing sensitive data.
Both vulnerabilities were responsibly disclosed to the companies earlier this year, with Perplexity notified on October 1, and Fellou on August 20. Public disclosures followed in late October.
A Warning for AI Browser Builders
Brave’s researchers warn that traditional web security principles no longer hold when AI agents act independently on behalf of users. “Agentic browser assistants can be prompt-injected by untrusted webpage content,” the researchers said, pointing out that such attacks bypass same-origin policies, the cornerstone of web safety, because the AI operates with user-level permissions.
The report concludes that “agentic browsing will remain inherently dangerous” unless browsers clearly separate user-driven actions from automated AI operations. Until that happens, Brave recommends isolating AI browsing modes and restricting sensitive actions unless users explicitly approve them.
Google’s Strategy: Building a Wall Around Gemini
While smaller browsers scramble to patch these flaws, Google says its Gemini AI has been designed with multiple layers of protection against such prompt injection attacks. The company describes its approach as comprehensive, layered security, spanning everything from prompt classifiers to user confirmation frameworks.
At the front line are Prompt Injection Content Classifiers, which automatically detect and block suspicious inputs. Then comes Security Thought Reinforcement, where Gemini is trained to “ignore adversarial instructions” and stay focused on the user’s intent.
Google also uses markdown sanitisation to strip hidden malicious code, Safe Browsing to flag dangerous links, and a user confirmation step for risky actions such as deleting data or sending messages.
“Since the initial deployment of our enhanced indirect prompt injection defences, our layered protections have consistently mitigated attacks and adapted to new patterns,” Google said in its statement.
In short, while AI browsers experiment with convenience, Google is doubling down on trust and control, an approach that may soon define the future of secure AI-driven browsing.

