
The concept is by no means new. Long before AI, attackers used encryption, packing, junk code insertion, instruction reordering, and mutation engines to generate millions of variants from a single malware family. Partly as a result, modern endpoint platforms rely more on behavioral analysis than on static signatures.
In practice, most so-called AI-driven polymorphism amounts to swapping a deterministic mutation engine for a probabilistic one powered by a large language model. In theory, that could introduce more variability; realistically, it offers no clear advantage over existing techniques.
Marcus Hutchins, a malware analyst and threat intelligence researcher, calls AI polymorphic malware “a really fun novelty research project” rather than something that gives attackers a decisive advantage. Non-AI techniques, he notes, are predictable, cheap, and reliable, whereas AI-based approaches require local models or third-party API access and can introduce operational risk.

Hutchins also points to examples like Google’s “Thinking Robot” malware snippet, which queried the Gemini model to generate antivirus-evasion code. In reality, the snippet merely prompted the model to produce a small code fragment with no defined function and no guarantee that it would work in an actual malware chain.
