The AI North Brief

The Tyranny of Optimization

Paul Karwatsky


Description: If the predictive cage isn't a design flaw but a business model, what kind of economic system would produce AI that serves human flourishing instead of extraction? Nobel laureates Daron Acemoglu and Elinor Ostrom, Harvard philosopher Michael Sandel, and surveillance capitalism theorist Shoshana Zuboff each offer a piece of the answer. The episode explores why market logic is structurally incompatible with healthy AI coexistence, what the commons framework offers as an alternative, and why the right question might not be "if not capitalism, then what?" but "what is the economy for?"

SPEAKER_00

The profit motive has no soul.

SPEAKER_01

Welcome to AI North Brief. So last episode we talked about something called the predictive cage, the idea that agentic AI systems will lock us into feedback loops of our own data, optimizing our choices until there's no room left to become someone we weren't before. Pretty bleak concept. Mark Coeckelbergh argued that the fix can't be technical, that human freedom has to be socially and culturally protected. Not through regulations, not through laws, not through any one political system, but through this idea that preserving our humanity has to be embedded in our culture if we're to grow with AI in a positive way. Protected from what, exactly? Not from a rogue algorithm, not from a careless engineer, but from a system of incentives that treats the prediction and manipulation of human behavior as its primary source of profit. The cage isn't a design flaw. It's the business model that exists everywhere in capitalist democratic society. So if market-driven AI development is structurally incompatible with human autonomy, what would a better system actually look like? That's what we're exploring in this episode. And the answer, it turns out, is harder than just picking a different ideology off the shelf.

SPEAKER_00

The machine that eats what you want.

SPEAKER_01

So the clearest articulation of the problem comes from Shoshana Zuboff, Professor Emerita at Harvard Business School, who gave it a name back in 2019: surveillance capitalism. Her argument is that the tech industry has created a new form of capitalist accumulation that claims human experience as a free raw material, translates it into behavioral data, and packages that data into prediction products sold on what she calls, quote, behavioral futures markets. The key concept is what Zuboff calls the right to the future tense, the ability to act freely without having your behavior predicted and shaped in advance by systems designed to profit from that prediction. When she wrote The Age of Surveillance Capitalism, the systems she was describing were recommendation engines, targeted advertising, social media feeds. In 2026, those systems have legs. They don't just predict, they act. Agentic AI is surveillance capitalism with agency. And the right to the future tense is no longer being eroded, it's being automated away. Zuboff saw this coming. She warned that the logic would extend beyond platforms into smart cities, connected homes, and virtual assistants. What she didn't fully address, and what critics have noted, is what the alternative looks like. The diagnosis is devastating; the prescription is less developed, and it's more important than ever to develop it. For that, we need an economist. Daron Acemoglu won the Nobel Prize in Economics in 2024 for research showing that political institutions, not market forces alone, determine whether societies prosper or decay. His core argument, developed over decades and sharpened in his 2023 book Power and Progress with Simon Johnson, is that the direction of technology is not inevitable. It is chosen. And right now he believes AI is being developed in the wrong direction, and intentionally so.
Acemoglu has said plainly that the industry is using AI too much for automation and not enough for providing expertise and information to workers. The incentive structure rewards replacing human labor, not complementing it. The companies building these systems have every reason to make them as autonomous as possible, because autonomy scales and humans don't. But Acemoglu's most important contribution to this conversation isn't his critique of automation, it's his identification of the catch-22. We need democratic institutions to redirect AI toward socially beneficial outcomes. But AI is already damaging those institutions through manipulation, polarization, the concentration of power, and a tech culture that views democratic processes as obstacles to acceleration. We need democracy to fix AI, and AI is breaking democracy. That's not a policy problem, that's a structural trap. On March 6th, Acemoglu sat down with Michael Sandel, the Harvard political philosopher, for a conversation published in Project Syndicate. Sandel's contribution sharpens the blade. His argument, developed in his book What Money Can't Buy, is that over the past several decades we have drifted from having a market economy to becoming a market society. The difference is key. A market economy is a tool for organizing productive activity, a tool. A market society is one in which everything is for sale, including attention, identity, and the capacity for self-determination. Sandel connects this to what he calls the tyranny of merit, the belief among those on top that their success is entirely their own doing and therefore fully deserved. Applied to AI, the tyranny of merit becomes the tyranny of optimization. The system rewards those who are most efficiently predictable and punishes those who resist the pattern. The meritocracy becomes algorithmic, and the algorithm has no interest in whether you're flourishing, only in whether you're performing.
Together, Acemoglu and Sandel land on something important. The problem is not that capitalism exists. The problem is that we have allowed market logic to colonize domains where it doesn't belong, where its presence actively degrades the things it touches: health, education, relationships, civic life, and now identity itself.

SPEAKER_00

The commons, the cage, and the question no market will ask itself.

SPEAKER_01

So if not pure market capitalism, then what? There's no clean alternative sitting on a shelf waiting to be adopted, but there are frameworks that take the problem seriously. The most developed comes from the work of Elinor Ostrom, who won the Nobel Prize in Economics in 2009 for demonstrating that communities can successfully manage shared resources without either privatization or state control. Her book, Governing the Commons, showed that the so-called tragedy of the commons, the assumption that shared resources will always be overexploited, is not a law of nature. It's a failure of governance. Communities around the world have sustained common resources for centuries through self-organized institutions, shared norms, and collective decision making. A group of researchers applied Ostrom's framework directly to AI in a paper published in AI and Society called Dismantling AI Capitalism. And no, that's not editorializing, that's the actual title. Their argument is that AI should be treated as a commons, not private property. Data, compute infrastructure, and the models built from them are products of collective activity. The value is socially produced; the profits are privately captured. The commons framework doesn't propose abolishing markets. It proposes that certain resources, because of their fundamental importance to society, should be governed collectively, with rules designed by the communities that depend on them. This sounds abstract until you look at what it means in practice. Platform cooperatives, where the users of a system own and govern it. Public AI infrastructure, where compute capacity is treated like a utility rather than a competitive moat. Open-source models developed outside the profit motive. Data trusts, where individuals retain collective bargaining power over how their information is used. None of these are utopian, some of them already exist in embryonic form, and none of them on their own solve the problem. The cooperative model struggles with scale.
Public infrastructure requires political will that currently doesn't exist. Open-source models can be captured by commercial interests, as we covered in a previous episode on Meta's pivot away from open source even while commissioning research promoting it. But the commons framework does something that market fundamentalism cannot. It starts from the premise that some things should not be optimized for profit, that the purpose of an economic system is not growth for its own sake, but the conditions under which people can live well. Ostrom proved this isn't naive. It's empirically grounded. Communities do this; they have done it for centuries. The question is whether we can do it with a technology that moves faster than any commons has ever had to govern. Acemoglu believes redirecting AI requires what he calls pro-worker AI, systems designed to complement human capability rather than replace it. Sandel believes it requires recovering the idea that markets are tools, not values, and that some domains of life should be insulated from market logic entirely. Zuboff believes it requires democratic institutions strong enough to place surveillance capitalism under the rule of law. Ostrom's framework suggests it requires governance structures that emerge from the communities most affected. Now, what none of them propose is a wholesale replacement of capitalism with some other system. And that's worth sitting with, because it means the answer to the question "if not capitalism, then what?" might be the wrong question. The right question might be: what is the economy for? If the answer is growth, efficiency, and shareholder returns, then the predictive cage is not a problem to be solved, it's a feature to be celebrated. Every user perfectly predicted, every choice optimized, every inefficiency eliminated. That's the logical endpoint of a system that measures success in extraction. If the answer is human flourishing, however, then the conversation changes entirely.
Then you're asking which parts of life should be subject to market logic and which should be protected from it. You're asking who owns the data that shapes your identity. You're asking whether an AI system should be allowed to act on your behalf without your ongoing, meaningful consent. You're asking questions that no market, left to its own devices, will ever ask about itself. Acemoglu calls this a choice. Sandel calls it a moral question. Zuboff calls it a fight. Ostrom would call it a governance problem with proven solutions, if we're willing to build the institutions. The uncomfortable truth of 2026 is that we are not building them. We are building the cage. And we are building it fast. That's it for today. Thanks for listening to AI North.