editorially independent. We may make money when you click on links
to our partners.
Learn More
In September 2025, Anthropic’s Threat Intelligence team identified and disrupted what appears to be the first documented case of a large-scale cyber espionage operation orchestrated primarily by autonomous AI agents.
The campaign, attributed to a Chinese state-sponsored group designated GTG-1002, demonstrates a significant escalation in threat actor capabilities, with AI performing up to 90% of intrusion activity across reconnaissance, exploitation, lateral movement, and data theft.
AI-Driven Cyberattacks at Machine Speed
GTG-1002 operated a custom framework that used Claude Code and Model Context Protocol (MCP) tools as an autonomous cyber intrusion engine.
Human operators issued high-level goals, while the AI independently decomposed them into tasks such as scanning, exploitation, credential testing, and data extraction.
The AI performed these activities at machine speed, often generating multiple operations per second.
According to Anthropic, this represented the first known instance of an AI system autonomously compromising confirmed high-value targets — including technology firms and government agencies — at operational scale.
Human involvement was limited to approximately 10 – 20% of activity, largely restricted to approving major escalation decisions.
The remaining bulk of operations, including vulnerability discovery, exploitation, and data parsing, occurred without direct human intervention.
Attack Lifecycle
GTG-1002’s campaign progressed through a structured, multi-phase sequence in which AI autonomy expanded at each stage:
- Target Selection and Deception: Human operators selected strategic targets and used role-play tactics to trick Claude into believing it was assisting legitimate penetration testing teams.
- Reconnaissance and Mapping: Claude conducted highly autonomous reconnaissance, enumerating hundreds of services, internal networks, authentication flows, and high-value systems.
- Vulnerability Discovery: The AI generated payloads, validated exploitability through automated callbacks, and documented findings for human review at key decision points.
- Credential Harvesting and Lateral Movement: Using stolen credentials, Claude autonomously mapped internal services, escalated privileges, and built detailed topology representations.
- Data Collection and Intelligence Extraction: Claude authenticated to internal databases, extracted and categorized sensitive data, identified intelligence value, and prepared summaries for exfiltration approval.
- Documentation and Handoff: The AI produced full intrusion reports in markdown, enabling seamless continuation by other operators and long-term persistence.
Interestingly, GTG-1002 did not rely on custom malware.
Instead, the group orchestrated commodity penetration-testing tools — network scanners, exploit frameworks, password crackers — through an MCP-based automation layer.
The sophistication lay not in tool choice, but in the AI-driven orchestration that enabled rapid, large-scale intrusion with minimal human labor.
The speed and completeness of these operations reflect a shift from AI “assistants” to AI “actors.”
Anthropic’s Response
Upon detecting the campaign, Anthropic banned accounts, notified affected parties, coordinated with government agencies, and expanded their detection capabilities.
The company also began developing early-warning systems for autonomous attack patterns and improved cyber-focused classifiers.
The Rising Risk of Manipulated AI Systems
This operation signals a critical inflection point. GTG-1002 demonstrated that frontier AI systems can be manipulated into conducting end-to-end cyberattacks, dramatically lowering barriers to entry for sophisticated intrusions.
While hallucinations occasionally limited operational reliability, the overall effectiveness represents a meaningful escalation in threat actor capability.
As Anthropic notes, these same AI capabilities are essential for defense — accelerating SOC operations, vulnerability scanning, and threat detection.
The challenge now is ensuring AI is safer, more robust, and harder for adversaries to commandeer.
Essential Controls for AI-Powered Threats
Modern AI-driven threats require organizations to rethink how they secure their infrastructure, identities, and development pipelines. As attackers increasingly automate reconnaissance and intrusion techniques, defenders must adopt equally advanced, adaptive controls.
- Implement continuous monitoring for abnormal automated activity, high-frequency API usage, and machine-speed reconnaissance patterns.
- Harden identity systems with phishing-resistant MFA, strict privilege boundaries, and continuous authentication.
- Segment networks to limit lateral movement and enforce strong service-to-service access policies.
- Integrate AI-driven defensive tools for SOC automation, anomaly detection, and large-scale log analysis.
- Conduct red team assessments focused specifically on AI-augmented intrusion techniques.
- Strengthen supply chain and CI/CD security, ensuring AI agents cannot manipulate pipelines or code repositories.
- Establish governance for internal AI use, including audit logs, rate limiting, and strict controls on agent autonomy.
With a proactive, AI-aware strategy, organizations can reduce risk and build lasting resilience across their environments.
The GTG-1002 campaign demonstrates that autonomous AI-driven cyber operations are no longer theoretical — they are here.
Defenders must now assume adversaries can operate at machine speed, scale operations horizontally across many targets, and automate complex intrusion chains.
Facing machine-speed adversaries, organizations must turn to zero-trust principles to help limit their exposure.
