
The malware fetches command payloads embedded in the description field of OpenAI “Assistants” (a field the attacker sets to control values such as “SLEEP”, “Payload”, and “Result”), then decrypts, decompresses, and executes them locally. After execution, the results are uploaded back through the same API, an AI-cloud twist on the “living off the land” model in which attackers abuse legitimate services instead of standing up their own infrastructure.
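To make the abuse concrete, here is a minimal audit sketch in Python, assuming the official openai SDK (1.x) and an organization reviewing its own API key: it enumerates Assistants on that key and flags descriptions matching the control values named above. The marker list comes from the reporting; the length heuristic is an illustrative assumption, not a published detection signature.

```python
# A minimal audit sketch, not Microsoft's tooling: assumes the official
# openai Python SDK (1.x) and the organization's own API key, and flags
# Assistant descriptions matching the markers reported for this campaign.
from openai import OpenAI

# Control values reported in the write-up; the 512-char threshold below is
# an illustrative assumption, not a published signature.
SUSPICIOUS_MARKERS = {"SLEEP", "Payload", "Result"}

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def audit_assistants() -> None:
    """Enumerate Assistants on this key and flag C2-style descriptions."""
    for assistant in client.beta.assistants.list(limit=100):  # auto-paginates
        desc = (assistant.description or "").strip()
        if desc in SUSPICIOUS_MARKERS:
            print(f"[!] {assistant.id}: description {desc!r} matches a known marker")
        elif len(desc) > 512:
            # Long, opaque descriptions could be carrying an encoded payload.
            print(f"[?] {assistant.id}: unusually long description ({len(desc)} chars)")


if __name__ == "__main__":
    audit_assistants()
```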
Because the attacker uses a legitimate cloud service for command and control, researchers noted, detection becomes harder: there is no attacker-registered C2 domain to block, only benign-looking traffic to api.openai.com.
Lessons for defenders and platform providers
Microsoft clarified that OpenAI’s platform itself wasn’t breached or exploited; rather, its legitimate API functions were misused as a relay channel. That distinction highlights a growing risk as generative AI becomes part of enterprise and development workflows: attackers can co-opt public AI endpoints to hide malicious command traffic inside the same connections legitimate tools already make.
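One practical takeaway: since blocking api.openai.com outright is rarely an option, defenders can baseline which hosts are expected to reach it at all and alert on the rest. The Python sketch below is a hypothetical illustration of that idea; the proxy-log schema (host, dest, bytes_out columns), file name, and allowlist are assumptions made up for the example, not a vendor recommendation.

```python
# A hedged detection sketch, not a production rule: baseline which hosts are
# expected to reach api.openai.com and flag everything else. The CSV schema
# (host, dest, bytes_out) and the allowlist are assumptions for illustration.
import csv

EXPECTED_AI_CLIENTS = {"dev-laptop-01", "ml-build-07"}  # hypothetical allowlist


def flag_unexpected_openai_traffic(log_path: str) -> None:
    """Print any host contacting the OpenAI API that is not on the allowlist."""
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if (row["dest"].endswith("api.openai.com")
                    and row["host"] not in EXPECTED_AI_CLIENTS):
                print(f"[!] unexpected OpenAI API traffic from {row['host']} "
                      f"({row['bytes_out']} bytes out)")


flag_unexpected_openai_traffic("proxy.csv")
```

In practice this logic would live as a SIEM or proxy-log query rather than a standalone script, but the baseline-and-alert shape is the same.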
