According to Forbes, MIT physicist Max Tegmark delivered a stark warning at Lisbon’s Web Summit about the unregulated path toward artificial superintelligence. He revealed that a Future of Life Institute petition to ban the creation of superintelligence has gathered over 127,000 signatures, with signatories including celebrities and tech luminaries. Recent FLI surveys show 64% of American adults oppose creating advanced AI, and Pew Research confirms public sentiment is shifting from enthusiasm to concern. Tegmark specifically called out Meta’s Mark Zuckerberg for co-opting the term “superintelligence” to market consumer products like smart glasses. He drew direct comparisons to pharmaceutical tragedies like thalidomide, which caused 10,000+ birth defects before regulation emerged.
The Superintelligence Threat
Here’s the thing about superintelligence – it’s not just another AI buzzword. We’re talking about systems that wouldn’t just be better at specific tasks, but superior across all cognitive domains. Creativity, problem-solving, scientific reasoning – the whole package. The idea isn’t new: Tegmark notes that Alan Turing discussed machines taking control as early as 1951, and mathematician I.J. Good proposed the “intelligence explosion” concept in 1965. But now we’re seeing the term get watered down for marketing purposes. Basically, once we hit Artificial General Intelligence (AGI) – human-level capability across most domains – recursive self-improvement could happen frighteningly fast.
The Regulation Gap
Tegmark’s sandwich comparison hits hard: “We’re in this funny situation in America where there’s more regulation on sandwiches than on AI.” He’s not wrong. Think about it – pharmaceutical companies need extensive clinical trials to prove safety, but AI systems that have been linked to teen suicides operate in a regulatory vacuum. The thalidomide analogy is particularly chilling because it shows how we typically wait for catastrophe before acting. And with AI, the stakes are arguably higher than any single drug disaster.
Job Obsolescence Reality
This isn’t your standard “AI will take some jobs” discussion. Tegmark puts it bluntly: “By definition, it could do everything that we can do but better.” That means complete economic obsolescence for human labor. No jobs. At all. Some optimists argue AI will create more jobs than it eliminates, but superintelligence represents a fundamentally different scenario: when something can outperform humans at every economically valuable task, there is no new niche left for displaced workers to fill. The public seems to be waking up to this reality, with surveys showing growing skepticism about where the technology is heading.
Political Paralysis and Solutions
The “arms race with China” argument feels increasingly like a corporate lobbying tactic to avoid regulation. Tegmark calls this out directly, noting it’s the perfect way to stifle oversight in America. His optimistic scenario involves China and America independently constraining their own companies out of self-preservation, then cooperating to prevent proliferation. But let’s be real – does anyone see that happening given current political divisions? The pessimistic scenario feels more likely: paralysis driven by corporate interests and political gridlock until it’s too late. The nuclear weapons analogy only goes so far, because AI systems are fundamentally different – harder to detect, easier to proliferate, and potentially more destabilizing.
