According to Fortune, Signal President Meredith Whittaker is sounding the alarm about AI agents being an “existential threat” to secure messaging apps and anyone building software for phones or computers. Speaking at the Slush technology conference in Helsinki last week, Whittaker warned that AI agents need broad access to sensitive user data like bank details and passwords to function, creating massive new attack surfaces for cybercriminals and spy agencies. She specifically highlighted how prompt injection attacks could trick AI into stealing emails, accessing accounts, or redirecting users to phishing sites. The Signal boss argued that if AI agents gain access to Signal contacts and messages through operating system integration, it would “nullify our reason for being” as a secure platform. Whittaker also criticized rival messaging apps like Meta’s WhatsApp and Facebook Messenger for pushing AI features she considers unnecessary and unwanted by users.
The security nightmare nobody’s talking about
Here’s the thing that makes this so concerning: AI agents basically need to become you to work properly. They need your bank login, your email access, your messaging history – everything. And that creates what security folks call an “attack surface” that’s absolutely massive. We’re not talking about one app being vulnerable – we’re talking about the entire operating system level becoming a potential entry point for attackers.
Prompt injection attacks are particularly scary because they’re so hard to defend against. Imagine a malicious website hiding instructions that trick your AI assistant into doing something harmful. Since AI agents can read and act on web content automatically, they could potentially steal your emails, drain your accounts, or redirect you to fake login pages without you even realizing what’s happening. It’s like giving hackers a backdoor through your most trusted digital helper.
The big tech rush that’s bypassing security
Whittaker isn’t mincing words about why this is happening. She points to the “eye-watering” infrastructure spending on AI and the pressure to justify those massive investments to shareholders. “There’s a need to continually float these valuations,” she told Fortune, leading to “reckless deployments that bypass security teams.” Basically, the financial incentives are driving companies to push AI features out the door before proper security reviews.
Meanwhile, companies like Meta are trying to frame their AI tools as safety-enhancing rather than privacy-eroding. They’re pointing to features like scam-detection in Messenger and WhatsApp and emphasizing that users have to opt-in to use these features. But is that enough? When the fundamental architecture creates new vulnerabilities, does it matter if the feature is optional?
The user demand question
Whittaker makes another interesting point: “No one wants AI in their messaging app. It’s really annoying.” She’s not wrong – how many people are actually begging for AI assistants in their chat apps? There’s some interest in practical features like translation and summarization, but most of the AI messaging features feel like solutions looking for problems.
And let’s be real – when was the last time you thought, “You know what this secure private conversation needs? An AI reading everything I type.” The consumer appetite for AI in messaging seems pretty mixed at best. Most people just want their messages to be private and secure, not “enhanced” with AI that might compromise that very security.
Broader implications for everyone
This isn’t just about Signal versus WhatsApp. Whittaker warns this threatens “the ability to develop safely at the application layer and the ability to have safe infrastructure that operates with integrity.” Translation: if operating systems bake in insecure AI agent access, it could compromise security for every app running on those platforms.
We’re at a crossroads where companies are making “very dangerous architectural decisions” in their rush to capitalize on the AI hype. The question is whether we’ll look back in five years and wonder why we traded fundamental security for what Whittaker calls “yawn-inducing conveniences.” Given how much sensitive information flows through our devices every day, that’s a trade-off worth thinking very carefully about.
