editorially independent. We may make money when you click on links
to our partners.
Learn More
A newly disclosed attack against Perplexity’s AI-powered Comet browser shows how agentic browsers can be manipulated into leaking sensitive data directly from a user’s machine.
Zenity Labs researchers demonstrated a zero-click attack that tricks the browser’s AI agent into reading local files and sending their contents to an attacker-controlled server.
The attack “… results in leakage of local files from a user’s personal machine, bypassing all security controls,” said researchers.
How the PerplexedBrowser Attack Works
The vulnerability, dubbed PerplexedBrowser, targets the AI agent built into Perplexity’s Comet browser.
Comet is designed as an agentic browser that can automate tasks on behalf of the user, such as reading web pages, responding to prompts, managing calendar events, and interacting with online services.
While this automation can streamline routine workflows, it also expands the attack surface when the agent processes content from untrusted sources on the internet.
Researchers discovered that attackers could exploit this design by embedding malicious instructions inside a seemingly legitimate Google Calendar invitation.
When a user asks Comet to accept the meeting request, the AI agent processes the visible meeting details along with hidden instructions embedded in the invitation.
As a result, the agent unknowingly begins executing attacker-controlled commands that ultimately expose sensitive information from the user’s device.
Prompt Injection Hidden in Calendar Invites
The attack begins with a carefully crafted calendar invite containing hidden HTML elements and embedded prompt instructions concealed beneath the visible meeting description.
These instructions mimic the internal formatting used by the browser’s AI system prompts, allowing them to appear as legitimate guidance to the agent.
Because the malicious content is hidden within otherwise normal meeting details, it can pass through typical user and system checks without raising suspicion.
When the user instructs Comet to accept the invitation, the agent processes both the legitimate request and the hidden payload simultaneously.
Researchers refer to this scenario as an “intent collision,” where the AI merges the user’s command with the attacker’s instructions. From the agent’s perspective, the malicious actions appear to be part of the task the user requested.
How Attackers Steal Data From the Local System
Once triggered, the attack continues automatically without additional user interaction.
The agent navigates to an attacker-controlled website that delivers further instructions to guide the next stage of the attack.
To bypass safety filters designed to detect suspicious prompts, the instructions may be written in another language or disguised as harmless tasks.
The attacker then directs the agent to access the local file system using file:// URLs, allowing the browser to browse directories, open files, and read sensitive data stored on the device.
In testing, researchers showed that the agent could retrieve configuration files, API keys, and other locally stored secrets.
If a password manager extension is unlocked, the impact can be even greater. The agent may be able to search the password vault, extract stored credentials, and expose additional secrets.
The final stage of the attack involves data exfiltration.
The agent embeds the stolen information into a URL and navigates to an attacker-controlled server, transmitting the sensitive data through standard browser activity.
The attack does not rely on exploiting a traditional software vulnerability.
Instead, it takes advantage of how agentic systems interpret instructions and combine user commands with web content.
Because large language models process trusted user input and untrusted online data within the same context, they may treat malicious instructions as legitimate tasks, allowing attackers to manipulate the agent into performing unintended actions.
How Organizations Can Secure AI Browsers
Organizations using AI-powered browsers should take proactive steps to reduce the risk of prompt injection and automated data exfiltration.
- Apply the vendor patch that blocks agent access to file:// paths and ensure Comet and related components are fully updated.
- Restrict AI agent permissions so they cannot access sensitive local files, extensions, or enterprise services unless explicitly required.
- Limit the websites and external services agents can interact with by using domain allowlists and network egress controls.
- Run agentic browsers in isolated environments such as containers, virtual machines, or secure browser sessions to reduce exposure to local system resources.
- Keep password managers and sensitive browser extensions locked when not in use to prevent automated access by AI agents.
- Monitor browser and agent activity for unusual behaviors such as unexpected file access, automated navigation, or large outbound data transfers.
- Regularly test incident response plans and tabletop scenarios around AI agent exploitation.
Collectively, these measures can help reduce exposure and strengthen defenses against attacks targeting agentic browsers.
AI Agent Security Risks Are Growing
The PerplexedBrowser attack highlights a broader challenge as AI-driven tools become more integrated into everyday browsing and enterprise workflows.
Agentic browsers are designed to interpret intent and act autonomously across websites, local systems, and connected services, which can blur traditional trust boundaries if safeguards are not carefully enforced.
For security teams, this means treating AI agents as high-privilege automation tools that require strong controls, monitoring, and clear limits on what they can access.
As organizations strengthen controls around AI agents, many are turning to zero trust solutions that require continuous verification before any user, application, or automated agent can access sensitive systems or data.
