AI Isn’t a Mind. It’s a Market. And That’s a Bigger Deal.

According to Bloomberg Business, a group of social and cognitive scientists is pushing a provocative new metaphor for understanding artificial intelligence. In a paper published last year in *Science*, authors Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans argue that today’s AI models are not akin to a human mind. Instead, they are a form of “cultural or social” technology that aggregates human knowledge, more like a printing press, a bureaucracy, or even the U.S. stock market. They contend that to manage AI’s future, we should study how we’ve handled transformative social technologies in the past, from the printing press to representative democracy. The core idea is that AI’s power lies in its ability to reorganize and remix vast amounts of information, solving a collective knowledge problem in a way no single human ever could.

The Printing Press of Our Time

Here’s the thing: this isn’t just another “AI isn’t really smart” hot take. The authors see it as a compliment. If human evolutionary success is built on our unique ability to learn from each other—through culture—then a new technology that supercharges that process is a monumental deal. They point to the printing press. It didn’t just make books cheaper; it fundamentally changed how knowledge was built by allowing texts and ideas from different milieus to sit side-by-side, as historian Elizabeth Eisenstein noted. It sparked new institutions like libraries and coffeehouses to make sense of the info explosion.

So when you ask ChatGPT to write a cover letter, you’re not consulting a mind. You’re engaging, as they write, in “a technically mediated relationship with thousands of earlier job applicants.” It’s cultural remixing on a scale we’ve never seen. And that’s the point. Large language models don’t need to be intelligent to reorder society. They just need to change how we organize information. The printing press led to the Enlightenment and revolutions. What does the AI aggregator lead to?

Hayek’s Ghost in the Machine

This is where the market analogy gets really sharp. The authors nod to economist Friedrich Hayek’s famous 1945 essay, where he argued a market’s genius is aggregating disparate information (that no single person can fully know) into a simple signal: a price. If a tin mine collapses, the price rises, new sellers jump in, and the system adjusts—all without a central planner. As economists like Tyler Cowen put it, “a price is a signal wrapped up in an incentive.”

Now think about AI. It’s doing “a kind of cultural arithmetic,” as Farrell calls it. A human career coach has useful experience, but they can’t hold the entirety of human cover letter writing in their head. A raw database of every letter ever written is useless. But an LLM? It absorbs that vast, Hayekian sea of information and distills it into a tractable output. It solves the information problem. It’s aggregating and calculating, not thinking. When an AI seems to have “taste” by recommending you an album, that’s probably just a very sophisticated form of cultural price discovery.

So Is It Really Just a Tool?

This framework is compelling because it sidesteps the tired debate between AI boosters and skeptics. The question isn’t “Is it smart?” but “What happens when you introduce a powerful new information processor into society?” As national security expert Richard Danzig wrote in a 2022 paper, “Markets, bureaucracies, and machines are inventions designed to process information… that surpass human capabilities.” He points to hybrids like Uber, which blend market pricing with AI algorithms to match riders and drivers.

But here’s where I get skeptical. Danzig’s paper came out in early 2022. The authors of the *Science* piece are building on ideas from that era. And a lot has changed in four years. Back then, LLMs were great at mimicry and summarization. Today? It’s become much, much harder to deny that what we’re seeing looks an awful lot like some form of alien intelligence. The “space of minds” is probably vast, and assuming AI must think like a human—or not think at all—might be our mistake. Can a system that performs “cultural arithmetic” develop emergent properties that look, for all practical purposes, like reasoning? I think it probably can.

Managing the Unmanageable

Ultimately, the “mind or market” debate isn’t just academic. It tells us how to govern this technology. If AI is a mind, we worry about alignment, consciousness, and control. If it’s a market or a bureaucracy, we have different playbooks. We think about antitrust, transparency, and the rules of the game. We ask: who sets the parameters of this cultural arithmetic? What biases are baked into its aggregation function?

The great contribution of this group’s work is forcing that shift in perspective. Viewing AI as a social technology means its biggest impacts won’t be in replacing individual workers, but in reshaping the collective systems—education, law, science—through which we learn. The printing press analogy is hopeful; it suggests we can build new institutions to harness the chaos. The market analogy is sobering; markets are powerful but amoral, prone to crashes and manipulation. AI is probably both. And that means we need to be thinking about building the libraries and writing the antitrust rules for this new world, right now.