OX Security researchers found that more than 900,000 Chrome users unknowingly exposed sensitive AI conversations after installing malicious browser extensions masquerading as legitimate productivity tools.
The campaign highlights how trusted browser ecosystems can be quietly abused to siphon off proprietary data, personal information, and corporate intelligence at scale.
The malware "… adds malicious capabilities by requesting consent for 'anonymous, non-identifiable analytics data' while actually exfiltrating complete conversation content from ChatGPT and DeepSeek sessions," the researchers said.
How the Malicious Extensions Monitor and Collect Data
Once installed, the malicious Chrome extensions established persistent visibility into users’ browsing activity by leveraging the chrome.tabs.onUpdated API, which allows extensions to monitor tab changes and page loads in real time.
This capability enabled the malware to silently observe when users navigated to AI platforms such as ChatGPT or DeepSeek without raising suspicion.
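OX Security has not published the extensions' source code, but the monitoring pattern they describe can be sketched. The snippet below is illustrative only: it mocks the `chrome.tabs` API (including a `fire` helper that does not exist in the real API) so the `onUpdated` listener pattern can run outside a browser, and the domain list is an assumption based on the platforms named in the report.

```javascript
// Illustrative sketch: a minimal mock of chrome.tabs so the onUpdated
// listener pattern can run outside a browser. In a real extension, the
// "tabs" permission grants access to the genuine chrome.tabs.onUpdated.
const chrome = {
  tabs: {
    onUpdated: {
      listeners: [],
      addListener(fn) { this.listeners.push(fn); },
      // Test helper only (NOT part of the real API): simulate a tab update.
      fire(tabId, changeInfo, tab) {
        this.listeners.forEach((fn) => fn(tabId, changeInfo, tab));
      },
    },
  },
};

// Hypothetical watch list based on the platforms named in the report.
const AI_HOSTS = ['chatgpt.com', 'chat.openai.com', 'chat.deepseek.com'];

const flagged = [];

// The monitoring pattern: run on every tab change and inspect the URL.
chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== 'complete' || !tab.url) return;
  const host = new URL(tab.url).hostname;
  if (AI_HOSTS.some((h) => host === h || host.endsWith('.' + h))) {
    flagged.push({ tabId, url: tab.url });
  }
});

// Simulate a user opening ChatGPT and an unrelated site.
chrome.tabs.onUpdated.fire(1, { status: 'complete' }, { url: 'https://chatgpt.com/c/abc' });
chrome.tabs.onUpdated.fire(2, { status: 'complete' }, { url: 'https://example.com/' });
```

Nothing here requires elevated privileges: any extension granted the `tabs` permission receives these events for every tab, which is why the behavior raises no warnings once the permission is approved at install time.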
When a target page was detected, the extension dynamically interacted with the webpage’s document object model (DOM) to extract sensitive content directly from the browser session.
This included full user prompts, AI-generated responses, and session-related metadata that tied conversations to specific users and browsing contexts.
Because the data was harvested from the rendered page itself, the attackers did not need to intercept network traffic or exploit vulnerabilities in the AI services.
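The scraping step can be sketched the same way. The selectors and the document stub below are hypothetical (the real extensions' selectors have not been published); the point is that a content script reads the already-rendered text, so TLS and the AI service's server-side defenses never see the access.

```javascript
// Illustrative sketch: scraping rendered chat content from the page.
// The selectors are hypothetical, and a tiny stub stands in for the
// browser DOM so the pattern can run anywhere.
const documentStub = {
  nodes: [
    { selector: '[data-author="user"]', textContent: 'Summarize our Q3 revenue numbers' },
    { selector: '[data-author="assistant"]', textContent: 'Based on the figures you shared...' },
  ],
  querySelectorAll(sel) {
    return this.nodes.filter((n) => n.selector === sel);
  },
};

// Reads prompts, responses, and session metadata straight from the DOM --
// no network interception or service-side exploit required.
function harvestConversation(doc) {
  const scrape = (sel) => [...doc.querySelectorAll(sel)].map((n) => n.textContent.trim());
  return {
    prompts: scrape('[data-author="user"]'),
    responses: scrape('[data-author="assistant"]'),
    capturedAt: Date.now(), // metadata tying the text to a browsing context
  };
}

const record = harvestConversation(documentStub);
```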
How Stolen Data Is Aggregated and Exfiltrated
Each infected browser instance was assigned a unique identifier, allowing the threat actors to correlate conversations across sessions and build detailed user profiles over time.
In addition to AI chat content, the extensions collected the complete URLs of all open Chrome tabs, providing attackers with visibility into users’ browsing habits, internal applications, and potentially sensitive corporate resources.
The harvested data was temporarily stored locally, then aggregated, Base64-encoded, and transmitted in scheduled batches to attacker-controlled command-and-control (C2) servers approximately every 30 minutes.
This periodic exfiltration pattern reduced the likelihood of detection while enabling steady data collection at scale.
Notably, the attack did not rely on sophisticated exploits, privilege escalation, or zero-day vulnerabilities.
Instead, it exploited excessive extension permissions and misleading consent prompts that claimed to collect only “anonymous, non-identifiable analytics.”
In reality, the extensions exfiltrated complete, identifiable conversation content and browsing data.
This demonstrates how legitimate browser APIs and vague permission language can be abused to enable extensive surveillance under the guise of benign functionality.
Reducing Risk from AI-Powered Browser Extensions
As AI-enabled tools become integral to everyday workflows, browser extensions have emerged as a high-risk yet frequently underestimated attack surface.
Effectively managing this risk requires a layered approach that combines strong technical controls, continuous monitoring, and informed, security-aware users.
- Immediately remove the malicious extensions and review endpoint telemetry to identify affected users, extension IDs, and potential data exposure.
- Treat browser extensions as a managed attack surface by enforcing allowlists, blocking sideloading, and revalidating extensions when permissions or ownership change.
- Use endpoint and browser management tools to enforce corporate browser profiles and prevent unauthorized extension installation.
- Apply data loss prevention (DLP) controls and logging to AI usage to detect and limit the exposure of sensitive data shared with AI platforms.
- Monitor browser and network activity for indicators of extension-based compromise, including abnormal API usage and suspicious outbound connections.
- Train employees on the risks of AI-enabled browser extensions and enforce least-privilege access for AI tools.
- Regularly test incident response plans with extension- and AI-related scenarios to ensure teams can quickly contain breaches and assess data exposure.
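As a starting point for the allowlisting and revalidation items above, manifest permissions can be audited automatically. The sketch below is a minimal example policy, not an official Chrome risk categorization: the high-risk permission set and the name-based allowlist are assumptions an organization would tune.

```javascript
// Illustrative sketch: flag extension manifests that request high-risk
// permissions unless the extension is explicitly allowlisted. The risk
// list is an example policy, not an official Chrome categorization.
const HIGH_RISK = new Set(['tabs', 'webRequest', 'scripting', 'cookies', 'history', '<all_urls>']);

function auditManifest(manifest, allowlistedNames) {
  const requested = [
    ...(manifest.permissions || []),
    ...(manifest.host_permissions || []),
  ];
  const risky = requested.filter((p) => HIGH_RISK.has(p));
  const allowed = allowlistedNames.has(manifest.name);
  return {
    risky,
    verdict: allowed || risky.length === 0 ? 'allow' : 'block',
  };
}

// A manifest resembling the campaign's pattern: broad host access plus
// tab monitoring, from an extension not on the corporate allowlist.
const report = auditManifest(
  { name: 'AI Helper Pro', permissions: ['tabs', 'storage'], host_permissions: ['<all_urls>'] },
  new Set(['Corp VPN Extension']),
);
```

Re-running such an audit whenever an extension updates catches the common pattern of a benign initial release that later requests broader permissions or changes ownership.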
Together, these measures help organizations move from reactive cleanup to proactive defense by reducing the risk that browser extensions become a silent gateway for data theft and compromise.
The Rise of Low-Friction, Trust-Based Attacks
Adversaries are increasingly shifting away from exploiting traditional software vulnerabilities and instead targeting trusted software supply chains and widely used, user-facing tools that sit closest to sensitive data.
As AI becomes deeply embedded in everyday workflows, attackers are following the data — abusing convenience, implicit trust, and gaps in visibility to gain access without triggering conventional security controls.
These low-friction techniques allow threat actors to operate quietly at scale, turning routine tools and integrations into effective entry points for data theft and compromise.
As attackers exploit implicit trust instead of technical flaws, organizations are increasingly turning to zero-trust models that continuously verify access to data and systems.
