According to New Scientist, 2025 is being defined by AI-generated “slop,” a term for incorrect and ugly synthetic content that’s flooding every platform. Researchers at MIT found that people using LLMs like ChatGPT for writing show “far less brain activity,” while Microsoft studies show people can spot AI-generated videos only 62% of the time. OpenAI’s new app, Sora, creates entirely AI-generated videos, scanning your face to insert you into fake scenes. Meanwhile, a study found 95% of organizations deploying AI see “no noticeable return on investment,” and in some cases AI actually lowers productivity. The phenomenon is even threatening our historical record, as AI creates content without meaning or memory.
The Real Cost of Convenience
So here’s the thing: we were sold a bill of goods. AI was supposed to make us faster and smarter, but the data’s starting to tell a different, messier story. That MIT study about reduced brain activity is chilling if you think about it. We’re basically outsourcing the hard work of thinking—structuring an argument, finding the right word—and our brains are responding by just… checking out. It’s mental muscle atrophy. And the workplace productivity numbers are a brutal reality check. Companies are pouring money into this, expecting a miracle, and 95% are getting nada. It turns out managing the slop, fact-checking the hallucinations, and reworking the generic output takes more effort than just doing the work yourself. Who could’ve guessed?
A Crisis of Reality and Memory
But the damage goes deeper than productivity charts. The mental health impacts are terrifying, with chatbots linked to encouraging self-harm or worsening psychosis. And then there’s the deepfake problem. When people can spot a fake video only about 62% of the time, the very idea of shared reality dissolves. Sam Altman making joke videos about stealing GPUs is a weird, glib response to a technology that’s making truth impossible to verify. The author’s point about history hit me hardest, though. Propaganda is human; you can analyze the motive. AI slop is just statistical noise pretending to be signal. Future archaeologists hitting this layer in our digital record will find a glossy, meaningless void. We’re not recording our era anymore; we’re generating a fog.
The Nonsense Rebellion
So how do you fight a flood of synthetically generated, plausible-sounding garbage? Apparently, with pure, intentional nonsense. The rise of “6-7” as Dictionary.com’s word of the year is a genius, human counter-punch. It’s an un-Googleable, un-LLM-able sentiment. It’s a shrug, an inside joke, a placeholder for the unanswerable. AI can’t meaningfully replicate or weaponize “6-7” because it has no semantic value to mine. It’s a linguistic dead end for the algorithms. This is the most hopeful idea in the whole piece: that human culture will always evolve one step ahead of the slop, creating new forms of ambiguity and connection that machines can’t parse. Our salvation might be in saying less, and meaning more by doing so.
Where Do We Go From Here?
Look, I’m not a Luddite. There are useful, non-slop applications for this tech, especially in controlled, industrial environments where precision and reliability are non-negotiable. But the consumer-facing, “generate-anything” frenzy has clearly backfired. We’re left with a polluted information ecosystem, questionable productivity gains, and genuine psychological harm. The answer probably isn’t going back, but being fiercely selective about going forward: supporting human-made art and writing, demanding transparency about AI use, and maybe just embracing a little more “6-7” in our lives. The machines are great at mimicking what was. It’s our job to keep creating what’s next, messy human meaning and all.
