The Means of Production

Why AI Model Ownership Matters: Labor, Power, and the Future of Wages

In every age of rapid technological change, society is faced with a fundamental question: who benefits? This was true during the steam-powered upheavals of the 19th century, and it is true now in the era of artificial intelligence. While the tools and timelines differ, the underlying forces remain remarkably similar.

We are not merely witnessing the rise of a new class of software tools—we are living through a second Industrial Revolution, this time driven by code instead of coal. And just as in the first, the question of ownership is central. In this new economy, the levers of productivity are not factories or railroads, but AI models: massive algorithmic systems capable of replacing labor, extracting value, and reshaping industries. And unless we act, that value will flow overwhelmingly to those who own the models.

From Machines to Models: A Shift in the Means of Production

During the first Industrial Revolution, factories and machines allowed owners to produce more goods with fewer people. Entire sectors of labor were displaced or deskilled. Traditional artisans were replaced by machine operators. Farmhands migrated to urban centers to find new forms of work. The resulting economic disruption was profound—and deeply unequal. A small group of capitalists amassed enormous wealth, while many workers endured low wages, dangerous conditions, and little recourse.

Today, we are seeing a new version of that transformation. But this time, the “machines” are not physical. They are large language models, neural networks, and algorithmic agents—trained on oceans of data and capable of generating everything from code to marketing copy to policy analysis.

These systems are not replacing brawn, but replacing cognition—tasks we long considered the exclusive domain of educated workers. Journalists, paralegals, researchers, and software developers now face the prospect of being supplemented or supplanted by AI.

And this raises the essential economic question of our time: If AI models can do the work, who gets paid?

Wages Are Being Replaced by Margins

The old deal was simple: labor contributed to production and was compensated through wages. But in the AI economy, the contribution of labor is increasingly indirect. Workers create content, data, and behaviors that are harvested to train models. Once trained, these models perform the work—at scale, at speed, and at negligible marginal cost.

The result is an economic inversion: where companies once paid thousands of people, they now license a model and collect profits. Wages are replaced by margins. Human workers are replaced by predictive engines.

This trend has already begun. Tech companies have frozen hiring, citing AI efficiencies. Customer service departments are being replaced by chatbots. Design and content creation are being delegated to generative tools. And as these systems improve, the pace of replacement will only accelerate.

Crucially, the value created by AI is not lost—it is simply redirected. It flows to the owners of the models, the holders of the capital, and the investors in the infrastructure. The economic surplus is not destroyed. It is captured.

The Concentration of AI Capital

Unlike past technologies that required physical distribution, AI is uniquely centralized. A small number of tech companies possess the data, computing power, and research talent necessary to train large-scale models. These models are protected by intellectual property law, cloud exclusivity, and massive capital barriers.

This concentration is dangerous. It creates not just economic inequality, but informational and political inequality. If a handful of entities control the dominant systems that mediate language, decision-making, hiring, and automation, they wield disproportionate influence over public life.

Even worse, many of these models are trained on publicly available data: websites, books, art, and social media, often used without consent or compensation. Society contributes the raw materials, but only the capital class reaps the reward.

We’ve seen this movie before: in the enclosure of common lands, in the privatization of railroads, and in the consolidation of energy companies. AI is poised to follow the same trajectory unless we rewrite the script.

Lessons from the First Industrial Revolution

There is hope—and history offers guidance.

The first Industrial Revolution gave rise to brutal conditions for the working class, but it also catalyzed the emergence of labor unions, public education, workplace safety laws, and progressive taxation. Over time, these interventions helped rebalance the social contract and ensure that the benefits of industrialization were more widely shared.

We must now do the same in the digital era. But instead of regulating machines, we must ask how to regulate models and, more importantly, how to democratize their ownership.

Rethinking Ownership in the Age of AI

There are several paths forward:

  • Public AI Models: Governments could invest in open, publicly owned AI infrastructure, similar to public transit or healthcare systems. These models would prioritize accessibility, equity, and public benefit.
  • Cooperative AI Development: Unions, worker-owned platforms, and civil society organizations could develop and govern models jointly—ensuring that those whose labor contributes to training the model share in its rewards.
  • License and Dividend Models: Just as royalties are paid on oil and mineral extraction, companies could be required to pay public dividends when their AI is trained on publicly available data or displaces jobs.
  • Model Transparency and Audits: Ownership comes with accountability. Any entity operating large-scale models should be subject to audit, disclosure, and democratic oversight.

Without these kinds of structural interventions, we risk falling into a digital feudalism: a world where power and wealth are concentrated among a few techno-lords, and the rest of society is left algorithmically managed, economically displaced, and politically disempowered.

The Time to Choose

AI will not wait. The speed of change is exponential, and the consequences of inaction grow more severe with each passing quarter. The models are here. The automation is happening. The profits are flowing.

But the rules—who owns, who benefits, who decides—are still being written.

At the Cambrian Institute, we believe that AI must serve the public interest, not just private shareholders. This is not a debate about technology. It is a debate about power, about fairness, and about the kind of civilization we want to build.

Just as we once fought for child labor laws, workplace rights, and the weekend, we must now fight to ensure that the next wave of technological progress does not recreate the injustices of the past.

Because in the end, the question of who owns the models is not just technical—it is civilizational.
