Anthropic’s Claude AI currently dominates the world of vibe coding. However, the company has revealed that Claude’s reach may extend beyond vibe coding into “vibe hacking.” In a recent report, Anthropic detailed multiple instances where threat actors exploited Claude AI to develop ransomware and conduct other malicious activities.
Threat Actors Exploit Claude AI For Malicious Activities, Including Ransomware Development
According to Anthropic’s Threat Intelligence Report: August 2025, the company has detected misuse of Claude AI for conducting various malicious activities, including ransomware operations.
While Claude AI has gained popularity among programmers as an efficient tool for “vibe coding,” its capabilities have also attracted threat actors. Terming this phenomenon “vibe hacking,” Anthropic revealed details about a range of malicious operations, from data extortion to ransomware development, all carried out with Claude AI.
Specifically, the firm detected and disrupted three different malicious operations exploiting Claude AI. These include:
1. Data extortion campaign:
The first malicious activity that Anthropic cited as a misuse of Claude AI is a sophisticated data extortion campaign. The threat actors, tracked as GTG-2002, used Claude AI to automate reconnaissance, credential harvesting, and network penetration against target networks. The attackers even relied on the AI to decide which data to exfiltrate and the best method for doing so. As stated in the report,
Claude not only performed “on-keyboard” operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process.
Using this strategy, the threat actors targeted 17 organizations across various sectors. They demanded large ransoms from the victims, exceeding $500,000 in some cases, threatening to publicly release the stolen data if victims did not comply.
2. Remote worker fraud:
The second malicious activity involved a remote worker scam. This fraudulent campaign was linked to North Korean threat actors, who posed as remote workers to target various Fortune 500 companies. The attackers created false identities with convincing background details to support the technical expertise they claimed for the jobs.
3. Ransomware-as-a-service (RaaS):
The most serious exploitation of Claude AI involved the development of ransomware-as-a-service (RaaS) models. Attributed to a UK-based threat actor group, GTG-5004, this operation used Claude AI for nearly every step, from development and marketing to the distribution of ransomware, all without manual coding. The threat actors built multiple ransomware variants employing ChaCha20 encryption, anti-EDR techniques, and Windows exploitation. Despite apparently lacking coding expertise, the threat actors were able to develop and sell the AI-generated ransomware on the dark web.
Upon detecting these activities, Anthropic banned the accounts involved in these operations. It also enhanced its security measures to swiftly detect and prevent such malicious activities in the future. Through this report, Anthropic highlights the critical need for ethical and secure use of AI as the technology continues to evolve.
Let us know your thoughts in the comments.