The AI North Brief
15 Minutes. Every Weekday Morning. The AI Intelligence You Need.
Artificial Intelligence is evolving faster than our ability to understand its impact. The AI North Brief is your daily filter, cutting through the noise to deliver only the essential news and policy shifts shaping Canada and the world.
Hosted by veteran news anchor and communications expert Paul Karwatsky, the show bridges the gap between the anchor desk and the cutting edge of AI governance. Currently pursuing his MS in AI Policy, Ethics, and Management at Purdue University, Paul brings a unique lens to the daily brief—combining decades of journalistic rigor with a deep, academic dive into the ethical frameworks and regulatory hurdles that will define the next decade.
Stay informed. Stay ahead. Subscribe to the AI North Brief today.
The AI North Brief
Training Your Replacement
Description
A Rogers contractor spent months training the AI tool his company introduced. Then he was laid off with a thousand others. In Montreal, a think tank used AI to write a policy paper. It passed peer review, beating human submissions. And in Ottawa, AI Minister Evan Solomon secured new commitments from OpenAI on Canadian oversight following the Tumbler Ridge shooting. Three stories from the past 24 hours, each sitting at a different point on the same curve.
Sources
- Canadian Affairs. "Canada's response to AI labour disruption inadequate, sources say." March 8, 2026.
- ABC Money. "A Canadian Think Tank's AI Paper Just Beat Humans In Peer Review—And Now It's Being Debated Globally." March 9, 2026.
- CBC News. "OpenAI CEO expressed 'horror and responsibility' over ChatGPT's ties to Tumbler Ridge, AI minister says." March 4, 2026.
- The Globe and Mail. "AI Minister tells OpenAI Canadian experts must assess flagged ChatGPT conversations." March 4, 2026.
- World Economic Forum. "Future of Jobs Report 2025."
Chapter Markers
00:00 Training Your Replacement
02:45 Passing Peer Review
05:00 Ottawa and OpenAI
06:30 What's Taking Shape
His name is Devin Marsh. He's 36 years old, and he worked customer support and sales for Rogers through a third-party company called Foundever. In 2024, his company introduced an AI tool to assist sales agents. Management told him it would make his job easier. Then it became a requirement. Then, according to an investigation published yesterday by Canadian Affairs, he spent months training the system, and by late last year, Foundever had laid off more than a thousand employees. And Marsh, well, he was one of them. This is AI North. Training your replacement.

The World Economic Forum's Future of Jobs Report 2025 finds that 40% of employers expect to cut roles where AI can handle routine tasks by 2030. The report projects 92 million job displacements globally, offset by 170 million new positions. Chris Roberts, Director of Social and Economic Policy at the Canadian Labour Congress, told Canadian Affairs that Canada's AI policy has focused primarily on stimulating the industry, with less attention so far to preparing workers for shifts in the labour market.

The policy landscape varies by jurisdiction. The European Union's AI Act requires employers who use AI in hiring, evaluations, or layoffs to inform workers, ensure human oversight, and monitor the risks. Germany requires employers to consult employee representatives when AI affects hiring or scheduling, and offers publicly funded training vouchers. Canada's national AI strategy is expected later this quarter. AI Minister Evan Solomon testified to Parliament in February, emphasizing commercialization, domestic computing capacity, and productivity. The strategy's approach to workforce transition hasn't been detailed yet.

Marsh, by the way, did eventually find another job. A human recruiter recognized his empathy as a strength. An AI assessment had flagged it as a weakness, apparently.

Passing peer review. In Montreal, a policy paper drafted with the help of AI beat several submissions written entirely by humans. The story, reported today by ABC Money, describes researchers at Mila, the Quebec Artificial Intelligence Institute, using a large language model to synthesize previous research, structure arguments, and draft portions of the paper. Humans edited and verified claims, but when the manuscript was sent to reviewers, they assessed it the same way they would any other scholarly work. They recommended publication.

Peer review has been science's gatekeeping ritual for decades, dependent on human experts reading submissions line by line, questioning assumptions, sometimes rejecting papers. Now an algorithm has passed through that process. Some researchers find this fascinating; others are understandably uncomfortable. The foundation of academic publishing has always been the notion that expertise is human and slow. AI-assisted research compresses that timeline from months to hours. Yoshua Bengio, the Turing Award winner who founded Mila, has been warning for years that AI is advancing faster than institutions can possibly adapt. Universities in the U.S. and Europe are now discussing disclosure guidelines for AI-assisted writing. Some journals are considering rules requiring authors to describe how AI was used. Others are asking whether peer review itself may need AI support just to keep pace.

Ottawa and OpenAI. AI Minister Evan Solomon has secured new commitments from OpenAI on Canadian oversight of its safety protocols following the Tumbler Ridge shooting. Solomon also requested that experts from the Canadian AI Safety Institute conduct a full assessment of OpenAI's new safety protocols. OpenAI said the company discussed steps it's taking to strengthen its law enforcement referral criteria and account for country and community context.
This follows revelations that OpenAI banned the Tumbler Ridge shooter from using ChatGPT last June due to concerning interactions, but didn't alert law enforcement before the killings in February. BC's chief coroner has announced an inquest into the shootings that will consider the role of artificial intelligence. BC Attorney General Niki Sharma, meantime, said there's a larger question for Ottawa around regulating and overseeing platforms like OpenAI.

What's taking shape? So three stories from the past 24 hours: a worker who trained the AI that replaced him, a paper written with AI that passed peer review, and new commitments from OpenAI on Canadian oversight of safety protocols. Each one sits at a different point on the same curve: workplaces, institutions, and regulators all adjusting to technology that moves faster than the systems designed to govern it. Canada's AI strategy is expected this quarter. What it prioritizes will say a lot about which of these stories gets addressed first. This is AI North.