A former Google engineer has been convicted in U.S. federal court of stealing thousands of confidential artificial intelligence documents.
Prosecutors said the stolen information was used to support a China-based startup and described the case as a major instance of AI-related economic espionage.
Authorities added that the material included some of Google’s most sensitive infrastructure and software designs used to power large-scale AI systems.
“This conviction reinforces the FBI’s steadfast commitment to protecting American innovation and national security,” said FBI Special Agent in Charge Sanjay Virmani in the press release.
Inside the Google AI Trade Secret Theft
Linwei Ding, also known as Leon Ding, was convicted on seven counts each of economic espionage and trade secret theft for stealing more than 2,000 confidential Google documents while employed as a software engineer.
Prosecutors said the stolen materials detailed the infrastructure Google uses to train and deploy large-scale AI models that power modern cloud services.
How the Data Was Taken
According to the U.S. Department of Justice, the theft took place over nearly a year, between May 2022 and April 2023.
During that time, Ding allegedly used his authorized access to Google’s internal systems to systematically transfer proprietary data from the company’s network to his personal Google Cloud account.
Trial evidence showed the documents detailed Google’s AI supercomputing architecture, including custom TPU and GPU systems and the software that orchestrates large-scale AI clusters.
The stolen trade secrets also included Google’s Cluster Management System software and custom SmartNIC hardware used for high-speed AI and cloud networking.
Prosecutors said the hardware and software designs represented core intellectual property that could give competitors a major advantage in building AI infrastructure at scale.
Efforts to Conceal the Theft
While carrying out the theft, Ding allegedly took deliberate steps to conceal his actions.
Investigators said he copied proprietary source material into Apple Notes, converted it to PDFs, and uploaded the files to his personal account.
Prosecutors also said Ding asked another employee to badge into a Google facility on his behalf, making it appear he was working onsite while he was actually in China.
Ties to China-Based Companies and Talent Programs
At the same time, Ding was secretly building ties to China-based technology firms.
By mid-2022, Ding was in talks to become CTO of a PRC-based startup, and by early 2023 he had founded Shanghai Zhisuan Technologies Co., serving as its CEO.
Prosecutors said these affiliations were not disclosed to Google while Ding remained employed at the company.
The trial also revealed that Ding applied for a Shanghai-based, government-sponsored talent plan, which is designed to attract overseas researchers to contribute to China’s technological development.
In his application, Ding stated that he intended to help China develop computing infrastructure comparable to international standards.
Prosecutors said the evidence showed he intended to help Chinese government-controlled entities develop AI supercomputers and custom machine learning chips, raising national security concerns.
Managing Insider Risk in AI Environments
Protecting high-value research and intellectual property requires sustained visibility into user activity, tighter access controls, and clear accountability across teams.
- Reassess insider threat programs and enforce least-privilege access to limit exposure to high-value AI research and intellectual property.
- Strengthen monitoring for data exfiltration, including transfers to personal cloud accounts, unmanaged devices, and unsanctioned services.
- Classify and segment sensitive AI systems and repositories to restrict access based on role, project, and business necessity.
- Deploy user behavior analytics to identify anomalous activity such as unusual download volumes, off-hours access, or role-inconsistent behavior (a minimal sketch of such checks follows this list).
- Tighten controls around employee transitions, including heightened monitoring and access reviews during role changes, departures, or extended travel.
- Validate and regularly test incident response plans to ensure teams can quickly detect, investigate, and respond to insider-driven data theft.
- Improve cross-functional coordination between security, legal, and HR teams to ensure early detection and consistent handling of insider risk indicators.
Together, these measures can help organizations reduce the blast radius of insider-driven data theft while building greater resilience against future threats to critical AI research and intellectual property.
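To make the exfiltration-monitoring and behavior-analytics items above concrete, the following Python sketch flags transfers to personal cloud services, off-hours activity, and upload volumes far above a user's baseline. The event fields, domain list, thresholds, and working hours are all illustrative assumptions, not a reference to any specific product or to Google's internal tooling.

```python
"""Minimal sketch of simple insider-risk detection rules.

Assumptions (illustrative only): audit events carry user, bytes_out,
destination_domain, and timestamp fields, and each user has a rolling
baseline of daily upload volume. Field names and thresholds are made up.
"""
from dataclasses import dataclass
from datetime import datetime

# Destinations treated as personal/unsanctioned cloud storage (illustrative).
# Note: in this case the data went to a personal account on a corporate-looking
# domain, so real deployments must also distinguish accounts, not just domains.
PERSONAL_CLOUD_DOMAINS = {"drive.example-personal.com", "dropbox.com"}

# Flag users whose daily upload volume exceeds this multiple of their baseline.
BASELINE_MULTIPLIER = 5.0


@dataclass
class AuditEvent:
    user: str
    bytes_out: int
    destination_domain: str
    timestamp: datetime


def find_anomalies(events: list[AuditEvent],
                   baselines: dict[str, float]) -> list[str]:
    """Return alerts for the signals the list above calls out: transfers to
    personal cloud services, off-hours activity, and unusual upload volumes."""
    alerts = []
    daily_totals: dict[str, int] = {}
    for ev in events:
        daily_totals[ev.user] = daily_totals.get(ev.user, 0) + ev.bytes_out
        if ev.destination_domain in PERSONAL_CLOUD_DOMAINS:
            alerts.append(
                f"{ev.user}: {ev.bytes_out} bytes to personal cloud "
                f"{ev.destination_domain} at {ev.timestamp:%Y-%m-%d %H:%M}"
            )
        # Off-hours access: outside an assumed 07:00-20:00 working window.
        if not 7 <= ev.timestamp.hour < 20:
            alerts.append(f"{ev.user}: off-hours transfer at {ev.timestamp:%H:%M}")
    for user, total in daily_totals.items():
        baseline = baselines.get(user, 0.0)
        if baseline and total > BASELINE_MULTIPLIER * baseline:
            alerts.append(f"{user}: {total} bytes out vs. baseline {baseline:.0f}")
    return alerts
```

In practice these signals would feed a SIEM or user-behavior-analytics pipeline rather than a list of strings; the point is that each recommendation above maps to a simple, testable rule.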
The conviction highlights the increasing role insider threats play as artificial intelligence becomes more strategically and economically significant.
As organizations expand their AI development efforts, protecting the underlying research and infrastructure requires consistent attention to insider risk alongside external security threats.
These challenges point to why organizations are turning to zero-trust solutions that limit implicit trust and continuously verify access to sensitive systems.
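As a hypothetical illustration of that zero-trust idea, the sketch below denies access by default and re-evaluates every request against identity, device posture, and role scope. The policy signals, role-to-repository mapping, and session-age limit are assumptions for illustration, not any vendor's actual model.

```python
"""Minimal sketch of a zero-trust access check: no implicit trust,
every request re-evaluated. All policy details here are illustrative."""
from dataclasses import dataclass

# Role -> repositories that role may read (illustrative mapping).
ROLE_ACCESS = {
    "ml-infra": {"tpu-designs", "cluster-mgmt"},
    "frontend": {"web-ui"},
}


@dataclass
class Request:
    user_role: str
    repository: str
    device_compliant: bool   # e.g., managed, patched, disk-encrypted
    mfa_age_minutes: int     # minutes since last strong authentication


def authorize(req: Request, max_mfa_age: int = 60) -> bool:
    """Evaluate each request against policy; deny by default."""
    if not req.device_compliant:
        return False                      # unmanaged device: no implicit trust
    if req.mfa_age_minutes > max_mfa_age:
        return False                      # stale session: force re-verification
    allowed = ROLE_ACCESS.get(req.user_role, set())
    return req.repository in allowed      # least-privilege, role-scoped access


# Example: compliant device and fresh MFA, but access stays role-scoped.
print(authorize(Request("ml-infra", "cluster-mgmt", True, 12)))   # True
print(authorize(Request("ml-infra", "web-ui", True, 12)))         # False
```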
