
AI-fueled attacks can transform an innocuous webpage into a customized phishing page. The attacks, revealed in research from Palo Alto Networks' Unit 42, are clever in how they combine various obfuscation techniques. That combination can be lethal and difficult to discover, and it represents yet another offensive front in the use of AI by bad actors to compromise enterprise networks.
The attack starts with an ordinary webpage, to which attackers add client-side API calls to LLMs that dynamically generate malicious JavaScript code in real time. This polymorphic technique is dangerous for several reasons. First, it can bypass an AI model's built-in security guardrails. Second, because the malware is delivered from a trusted LLM domain, it may evade typical network analysis. And without runtime behavioral analysis screening, it is hard to discover or block, because the final malware code is assembled inside the victim's browser and leaves no static payload anywhere else in the process.
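The runtime behavioral screening mentioned above can take the form of instrumenting the JavaScript functions that turn strings into code. The sketch below is a minimal, illustrative defense, not Unit 42's tooling: the pattern list and function names are assumptions made for the example, and a real product would use far more sophisticated behavioral signals.

```javascript
// Illustrative runtime guard: intercept dynamic code execution and screen
// the assembled source before it runs. Patterns here are placeholders.
const suspiciousPatterns = [/document\.cookie/, /password/i, /fetch\s*\(/];

function screenDynamicCode(source) {
  // Return the patterns that the candidate source code matches.
  return suspiciousPatterns.filter((re) => re.test(source));
}

// Hook the global eval so dynamically assembled payloads are checked
// at the moment of execution, where the full source finally exists.
const nativeEval = globalThis.eval;
globalThis.eval = function guardedEval(source) {
  const hits = screenDynamicCode(String(source));
  if (hits.length > 0) {
    throw new Error(`Blocked dynamic code matching: ${hits.join(", ")}`);
  }
  return nativeEval(source);
};
```

The design point is that a static scanner never sees the final payload, so the only reliable choke point is the call that executes it; hooking `eval` (and, in practice, `Function` and script injection) is one way to observe that moment.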
The analysts at Unit 42 wrote proof-of-concept code that coaxes popular LLMs such as DeepSeek and Google's Gemini into returning the malicious JavaScript. The key step is to split the work across separate prompts that describe the malware's functionality in plain text, each of which generates a different piece of the actual code. The model can produce many variations of the phishing code, and the pieces are only assembled afterward; both traits make detection more difficult. That assembly happens at the very end of the malware supply chain, in what SquareX calls a "last mile reassembly" attack.
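The reason last-mile reassembly defeats static scanning can be shown with an entirely harmless stand-in. In the sketch below, no single fragment is meaningful on its own, and the complete script never exists as a static artifact on any server; it only comes into being in the browser, immediately before execution. The fragments here are benign placeholders, not anything from the researchers' proof of concept.

```javascript
// Benign illustration of last-mile reassembly: the final script is
// concatenated from innocuous-looking fragments just before it runs,
// so no scanner upstream ever sees the complete code.
const fragments = [
  "const greet = (name) => ",   // fragment 1: start of a declaration
  "`hello ${name}`; ",          // fragment 2: the function body
  "greet('world');",            // fragment 3: the call that runs it
];

// Only at this point does the full source exist anywhere.
const assembled = fragments.join("");
const result = eval(assembled); // evaluates to "hello world"
```

In the attack Unit 42 describes, the fragments are not hard-coded but generated fresh by an LLM on each visit, which is what makes the payload polymorphic as well as invisible to static analysis.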
