
As for the impact of AI, X-Force reports the technology is no longer an emerging concept in cybersecurity: “It’s a force multiplier actively used by both defenders and adversaries. Threat actors are already applying generative AI to scale phishing operations, accelerate malicious code development and enhance social engineering through improved language quality and realism. At the same time, defenders are using AI-driven analytics to process vast volumes of telemetry, identify anomalous behavior and shorten detection and response timelines.”
“Adversaries increasingly use AI to accelerate research, analyze large data sets and iterate on attack paths in real time, allowing them to adjust tactics as conditions change rather than relying on static, preplanned actions,” the X-Force report states. “This operational flexibility increases dwell-time risk and places greater strain on security teams that depend on fixed rules, signatures or delayed analysis to detect malicious activity.”
As multimodal AI models mature, X-Force states that it expects adversaries to automate complex tasks like reconnaissance and advanced ransomware attacks, driving faster-moving, more adaptive threats.
Some other pertinent findings include:
- X-Force identified a nearly 4x increase in large supply chain or third-party compromises since 2020, mainly driven by attackers exploiting trust relationships and CI/CD automation across development workflows and SaaS integrations. With AI-powered coding tools accelerating software creation, and occasionally introducing unvetted code, the pressure on pipelines and open‑source ecosystems is expected to grow in 2026.
- Active ransomware and extortion groups surged (49%) year over year, marking ecosystem fragmentation, while publicly disclosed victim counts rose roughly 12%.
- Vulnerability exploitation became the leading cause of attacks, accounting for 40% of incidents observed by X-Force in 2025.
- Compromised chatbot credentials create AI-specific risks beyond simple account access. Attackers can manipulate outputs, exfiltrate sensitive data or inject malicious prompts.
- Attackers are using AI to speed research, analyze large data sets and iterate on attack paths in real time.
- Agentic AI has introduced new risks, and amplified others. Security leaders need a comprehensive AI governance solution to scale AI with trust and transparency.
“Protecting identities has always posed a challenge. It’s about to get harder. As attackers fine-tune their credential‑driven operations, IT and security leaders must turn to AI to help them gain visibility into identity-based risks and threats across their IT landscape,” the X-Force report states. “By combining AI-powered identity threat detection and response (ITDR) and identity security posture management (ISPM) services and solutions, organizations can move more quickly and efficiently to identify vulnerabilities and prevent attacks from happening.”
