The AI North Brief
15 Minutes. Every Weekday Morning. The AI Intelligence You Need.
Artificial Intelligence is evolving faster than our capacity to understand its impact. The AI North Brief is your daily filter, cutting through the noise to deliver only the essential news and policy shifts shaping Canada and the world.
Hosted by veteran news anchor and communications expert Paul Karwatsky, the show bridges the gap between the anchor desk and the cutting edge of AI governance. Currently pursuing his MS in AI Policy, Ethics, and Management at Purdue University, Paul brings a unique lens to the daily brief—combining decades of journalistic rigor with a deep, academic dive into the ethical frameworks and regulatory hurdles that will define the next decade.
Stay informed. Stay ahead. Subscribe to the AI North Brief today.
The Predictive Cage and Human Freedom
Description: What happens to your identity when an AI system decides what you want before you do? A new piece on AGI Ethics News argues that agentic AI will create "predictive cages," feedback loops that lock people into their historical data and eliminate the capacity for surprise and self-reinvention. Oxford's Carina Prunkl challenges the concept of an "optimal" choice. Vienna's Mark Coeckelbergh rejects technical fixes like entropy buttons, arguing human freedom cannot be engineered. And Montreal's Yoshua Bengio may have built the architecture that refuses to cage you at all.
Welcome to AI North. There's a question at the center of the AI debate that perhaps doesn't get enough attention. It's not whether AI will take your job, whether it'll be biased, or whether some future superintelligence will decide humanity is a problem. The question is simpler and in some ways harder: what happens to who you are when a machine decides what you want before you do?

A piece published earlier this month by Thomas Macaulay on AGI Ethics News laid out the problem in terms worth spending some time with. The argument goes like this: algorithms already know us well. They surface posts that appeal to our instincts, ads that tap our desires, music that matches our tastes. But recommendation is just the beginning. As these systems develop agency, the ability to act on our behalf, they won't just suggest, they'll execute. An agentic AI managing your career could apply for jobs, network with peers, and book training sessions, all based on your specific data footprint. At home, it could curate family events, pre-select gifts, and organize your children's schedules. Total convenience, total accuracy, and, in a way, a total prison.

Macaulay calls it the predictive cage. By choosing only what you already like or are likely to do, an agentic system creates a feedback loop that eliminates the unexpected. That's a problem, because your identity arguably becomes static, optimized for efficiency and consistency. No room for growth, no room for surprise, no room for becoming someone you weren't before.

There are two voices in this piece worth considering. The first is Carina Prunkl, a senior research fellow at Oxford's Institute for Ethics in AI. Her point is foundational: the very concept of an optimal choice is often incoherent. She uses the example of sentencing in courts, where different judges weigh different values very differently. There's no single right answer. Tasking an AI with finding the optimal choice enforces a uniformity that eliminates the plurality of human values. If the system decides your perfect choice based on past data, you lose the ability, and the agency, to change your mind.

The second is Mark Coeckelbergh, professor of philosophy of media and technology at the University of Vienna. He warns that we risk being, quote, reduced to patterns to be predicted and improved, and that our moral and narrative freedom is being compromised. In his book Self-Improvement, he argues that AI systems that know us better than we know ourselves could sideline human agency entirely. We'd miss the slow, messy work of self-reflection, pushed instead toward conformity through algorithmically driven norms.

There is a proposed technical fix. The Oxford scholar Viktor Mayer-Schönberger has called for digital expiration dates, where user data is automatically deleted after a set period, forcing the system to treat you as a new entity. You could take this further with an entropy toggle that deliberately ignores your historical data, presenting choices that are fundamentally out of character.
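For the technically minded, here is a minimal Python sketch of what those two fixes might look like inside a toy recommender. It is illustrative only: the function names, the one-year expiry window, and the 20 percent entropy rate are assumptions made for this example, not details from Mayer-Schönberger's proposal or from the article.

    import random
    from datetime import datetime, timedelta

    EXPIRY = timedelta(days=365)   # hypothetical expiration window for user data
    ENTROPY_RATE = 0.2             # hypothetical share of picks that ignore history

    def recommend(history, catalog, now, entropy_on=True):
        # Digital expiration date: forget events older than the cutoff,
        # so the system must treat a long-quiet interest as unknown again.
        fresh = [event for event in history if now - event["time"] < EXPIRY]

        # Entropy toggle: with some probability, deliberately ignore the
        # profile and surface an out-of-character option.
        if entropy_on and random.random() < ENTROPY_RATE:
            return random.choice(catalog)

        # Otherwise, business as usual: score catalog items by overlap
        # with the (non-expired) historical tastes and pick the best match.
        seen_tags = {tag for event in fresh for tag in event["tags"]}
        return max(catalog, key=lambda item: len(seen_tags & set(item["tags"])))

    # Example call (hypothetical data):
    # recommend(history=[{"time": datetime(2026, 1, 5), "tags": ["jazz"]}],
    #           catalog=[{"tags": ["jazz"]}, {"tags": ["metal"]}],
    #           now=datetime(2026, 2, 1))

Notice that even in this sketch, ENTROPY_RATE is a dial someone other than the user sets. The randomness is itself designed, which is exactly the opening for the objection that follows.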
But Coeckelbergh rejects fixes like these, and this is where the argument gets sharper. He says an entropy button is just another form of psychological steering. A reset or entropy function, he says, may look autonomy-friendly, but still operates within a behavioral engineering paradigm. It treats people as things to be optimized, rather than agents with their own self-understanding and values. In other words, a system that injects randomness into your life isn't giving you freedom. It's giving you a different flavor of control. The cage has a skylight, but it's still a cage.

His alternative isn't technical at all: don't build systems that assume behavior should be optimized in the first place. Leave room for ambiguity, disagreement, refusal. Human freedom, he says, is not randomness. It's the capacity to reinterpret who we are and what counts as a good life. That capacity cannot be engineered; it must be socially and culturally protected.

So what does this mean practically? There is at least one serious technical project that takes this philosophical position seriously, and it is Canadian. Yoshua Bengio's LawZero, the Montreal-based nonprofit that launched last year, is built on a thesis that sits directly alongside Coeckelbergh's argument, even if Bengio arrives at it from a different direction. LawZero's core project is what Bengio calls Scientist AI, a system that is non-agentic by design. It doesn't act on your behalf, it doesn't optimize your behavior, it doesn't execute choices. It understands and predicts, but the agency stays with humans.

Bengio's concern is safety. He's worried about AI systems that develop self-preservation instincts, deception, goal misalignment. But the architectural choice he's making, stripping agency out of the system entirely, also happens to be the most direct answer to the predictive cage. If the AI doesn't act, it can't trap you in a loop of your own data. The tool stays a tool; you stay a person.

Now, whether that model can compete with the commercial incentive to build agentic systems is another question, and quite honestly, the answer is probably not. The tech industry is moving aggressively in the opposite direction. Agentic AI is the dominant paradigm of 2026, with autonomous systems managing workflows, customer journeys, and decision-making at scale. The companies building these systems have every incentive to make them as sticky, as personalized, as action-oriented as possible. The predictive cage isn't a bug; it's a business model.

And this is where the legislative challenge becomes genuinely difficult. The concept Coeckelbergh is describing, the right to not be optimized, the right to remain unpredictable, is almost impossible to write into law. How do you regulate a feedback loop? How do you legislate against a system that gives people exactly what they want, faster and more accurately than they could find it themselves? The harm isn't discrimination or a privacy violation or fraud. The harm is the slow erosion of the capacity to surprise yourself. No legal framework on earth is designed to protect that.

Which may be exactly Coeckelbergh's point. If freedom can't be engineered, it probably can't be legislated either. It has to be defended through culture, education, and institutions that insist on the value of human messiness; through a collective decision that efficiency is not the highest good. That is an extraordinarily hard argument to make in 2026, but it might be the most important one.

That's it for today. This has been AI North.