According to PYMNTS.com, generative AI is forcing a historic collision between copyright and antitrust law, two legal doctrines that have long operated separately. In a recent interview, Daryl Lim, the H. Laddie Montague Jr. Chair in Law at Penn State Dickinson Law, explained that training frontier AI models requires ingesting vast repositories of copyrighted works at an industrial scale. He noted that only a handful of vertically integrated firms simultaneously control the necessary compute, data, cloud infrastructure, and distribution. This concentration creates a “paradox of bigness,” where the scale that makes AI powerful also raises dominance concerns. Lim warned that a major risk is courts allowing dissatisfaction with licensing to substitute for proof of competitive harm, turning antitrust into a proxy for copyright enforcement. The guiding principle, he argued, must remain demonstrable exclusionary conduct, not size alone.
The Paradox of Bigness
Here’s the thing: AI development is inherently a game of scale. You need massive compute power and oceans of data. That’s just the recipe. But that recipe naturally, almost inevitably, leads to a market structure dominated by a few giant players who control the entire stack. It’s a structural bind. The tech needs to be big to be good, but bigness looks an awful lot like market power. Lim’s point about this being a paradox is spot on. We want powerful, reliable, safe AI. But the path to get there seems to concentrate control in ways that make antitrust regulators deeply nervous. It’s a tension with no easy answer.
Don’t Mix The Legal Tools
This is where Lim’s analysis becomes crucial. There’s a huge temptation, especially in the public debate, to smash these two legal frameworks together. People are upset about their work being used to train models without explicit permission, so they reach for the antitrust hammer to punish the big AI companies. But that’s a dangerous shortcut. Copyright law has its own internal mechanism for this exact conflict: fair use. Courts have used it for decades to navigate tech shifts, from search engines indexing the web to software reverse engineering. The core question is the same: is the use transformative learning, or is it just creating a substitute? That’s a copyright question, not an antitrust one.
Using antitrust to settle a copyright fight is like using a sledgehammer to do a scalpel’s job. You might get a result, but you’ll cause a lot of collateral damage. If a court rules that training is fair use under copyright law, but then a regulator turns around and says that same training practice is an exclusionary antitrust violation, what’s a company supposed to do? The market freezes. Innovation slows to a crawl because the rules are impossible to follow. Predictability goes out the window.
When Antitrust Should Actually Step In
So when *should* antitrust get involved? Lim gives clear, familiar examples: when there’s demonstrable foreclosure of rivals. Think exclusive deals with cloud providers that lock others out of essential compute power. Or data partnerships with onerous terms that no competitor could possibly match. Or coercive contracts that effectively seal off distribution channels. These are classic exclusionary tactics that antitrust law is designed to address. The conduct, not the size. The focus has to stay on whether a company is actively blocking competition, not on whether it’s big because it built a better, more scalable model.
The Broader Risk of Politicized Enforcement
Maybe the most important warning here is about predictability. Lim points out that antitrust is increasingly being asked to solve every problem—industrial policy, labor issues, even cultural outcomes. When enforcement becomes a tool of ideology, shifting wildly between administrations, it kills investment and confidence. Why pour billions into R&D if the legal goalposts are going to move every four years based on who’s in power? For a capital-intensive field like AI, that uncertainty is a killer. Markets need neutral, predictable rules to function. Not perfectly static rules, but ones that evolve based on evidence and coherent doctrine, not just the political mood. If we blur the lines between copyright and antitrust, and mix in a bunch of other social goals, we’ll end up with a regulatory mess that helps no one and stifles the very innovation we’re all trying to harness.
