OpenAI’s API Gets Weaponized in Sneaky ‘SesameOp’ Backdoor


According to Infosecurity Magazine, threat actors are weaponizing the OpenAI Assistants API to deploy a backdoor called SesameOp that manages compromised devices remotely. Microsoft’s Detection and Response Team discovered the attack in July 2025 while responding to a security incident in which the attackers had maintained a presence for several months. The backdoor uses a heavily obfuscated DLL called Netapi64.dll, loaded via .NET AppDomainManager injection, and leverages OpenAI’s infrastructure for command-and-control communications. Instead of using traditional C2 channels, SesameOp fetches commands through the Assistants API, executes them locally, then sends the results back as encrypted messages. Microsoft published its findings about this covert operation on November 3, noting that OpenAI plans to deprecate the Assistants API in August 2026 in favor of the Responses API.


Why This Changes Everything

Here’s the thing about SesameOp: it’s not your typical malware. We’re talking about attackers using completely legitimate, trusted infrastructure for their dirty work. OpenAI’s API? That’s exactly the kind of traffic that normally sails right past security controls. Most organizations aren’t blocking access to major AI platforms, and threat actors know it.

The sophistication here is pretty wild. The attackers use .NET AppDomainManager injection to load their malicious DLL, abusing a legitimate Windows extensibility mechanism rather than relying on an obvious exploit. And the encryption layering? Both symmetric and asymmetric crypto to hide their tracks. This isn’t some script kiddie operation; we’re dealing with professionals who understand exactly how to stay hidden.
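For context, AppDomainManager injection abuses a documented .NET extensibility point: a .config file placed next to a trusted executable (or the APPDOMAIN_MANAGER_ASM / APPDOMAIN_MANAGER_TYPE environment variables) can tell the runtime to load an arbitrary assembly at startup. That also makes it something defenders can sweep for. Here’s a minimal defensive sketch; the function name and directory layout are illustrative assumptions, not anything from Microsoft’s report:

```python
import re
from pathlib import Path

# AppDomainManager injection is typically staged via a .config file that
# sits next to a legitimate .NET executable and contains something like:
#   <runtime>
#     <appDomainManagerAssembly value="Netapi64" />
#     <appDomainManagerType value="SomeNamespace.SomeManager" />
#   </runtime>
# This sweep flags any .config file declaring those elements.

SUSPICIOUS = re.compile(r"appDomainManager(Assembly|Type)", re.IGNORECASE)

def find_appdomainmanager_configs(root):
    """Return .config files under `root` that declare an AppDomainManager."""
    hits = []
    for cfg in Path(root).rglob("*.config"):
        try:
            text = cfg.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the sweep
        if SUSPICIOUS.search(text):
            hits.append(cfg)
    return hits
```

Legitimate software can set an AppDomainManager too, so treat hits as leads for triage (is the named assembly signed? does it belong next to that executable?), not as automatic verdicts, and pair the file sweep with process-telemetry checks for the equivalent environment variables.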

The Business Implications

So what does this mean for companies relying on AI services? Basically, we’re entering a new era where the very tools that power innovation become potential attack vectors. OpenAI’s API infrastructure becomes the perfect cover – it’s trusted, it’s widely used, and monitoring for malicious activity means potentially disrupting legitimate business operations.

Think about the timing, too. OpenAI plans to deprecate the Assistants API in August 2026, but that still leaves attackers plenty of time to exploit the channel. And who’s to say the replacement Responses API won’t be abused the same way? The cat-and-mouse game just moved to a whole new playground.

What Comes Next

Look, this discovery by Microsoft DART is probably just the tip of the iceberg. If one group figured out how to weaponize AI APIs, others have too. The question isn’t whether we’ll see more of these attacks; it’s how many are already happening that we haven’t detected yet.

Companies need to rethink their security posture around cloud and AI services. Traditional network monitoring that focuses on suspicious domains and IP addresses? That’s becoming less effective when attackers can just use major platforms like OpenAI. We’re going to need behavioral analysis, API monitoring, and way more sophisticated detection that can spot abnormal patterns in what looks like legitimate traffic.
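One simple way to operationalize that kind of behavioral analysis is baseline-and-deviation detection on egress logs: flag hosts that suddenly start talking to an AI API endpoint they never used before. A minimal sketch over flattened proxy-log records follows; the record shape, the `api.openai.com` focus, and the function name are assumptions for illustration:

```python
from collections import Counter

# Each record is a (host, destination_domain) pair, e.g. parsed from
# proxy or DNS logs. Extend the set with other AI platforms as needed.
AI_DOMAINS = {"api.openai.com"}

def flag_new_ai_talkers(baseline_records, current_records):
    """Hosts contacting an AI API now that never did during the baseline.

    Returns a dict mapping each newly-talking host to its request count
    in the current window.
    """
    baseline_hosts = {h for h, d in baseline_records if d in AI_DOMAINS}
    current = Counter(h for h, d in current_records if d in AI_DOMAINS)
    return {h: n for h, n in current.items() if h not in baseline_hosts}
```

A build server or domain controller that never called OpenAI suddenly making hundreds of Assistants API requests is exactly the pattern worth a closer look. Legitimate new adoption will trip the same rule, of course, so this is a triage signal to combine with process lineage and volume anomalies, not a standalone detection.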

And honestly? This is why Microsoft’s timing in publishing this research matters. They’re giving the security community a heads-up about a technique that’s likely to become more common. The race is on to develop defenses before this becomes the new normal for sophisticated attacks.
