The Trust Imperative: Navigating Canada’s Path to AI Adoption
- Gareth Spanglett
- Aug 13
- 4 min read
Canada was first out of the gate with a national AI strategy, but now risks being outpaced in the global race to implementation. How does the nation that pioneered foundational AI research translate its academic prestige into tangible, trustworthy innovation for all Canadians? In a recent CIPPIC Speaker Series, Jordan Zed, the federal government's Assistant Secretary for Artificial Intelligence, offered a frank assessment of this pivotal moment. His discussion centred on three critical challenges: bridging the chasm between world-class research and lagging adoption, achieving legal clarity in a shifting global landscape, and building public trust as the bedrock of an AI-powered future.
From Pioneer to Practitioner: The Adoption Challenge
Canada was a trailblazer in artificial intelligence: it was the first country to launch a national AI strategy, built on the groundbreaking work of researchers like Yoshua Bengio, Geoffrey Hinton, and Richard Sutton. This strategy established national AI institutes in Montreal, Toronto, and Edmonton, which have been pillars of the Canadian approach. However, Zed frankly acknowledged a persistent gap between this incredible research expertise and its broader adoption across Canadian society. While Canada led on the research side, many other countries have since caught up and, in some ways, surpassed it.
The conversation underscored that to move to the next level, Canada must focus on commercialization and adoption. This challenge is not unique to AI but is a common theme across many sectors in the country. Zed pointed to specific areas ripe with untapped potential, such as leveraging Canada's extensive healthcare data, advancing our competitive advantage in geospatial AI, and utilizing vast datasets for high-quality language translation. Unlocking this potential, however, is hampered by a lack of clarity around data use, privacy considerations, and security. The creation of the AI Secretariat itself was a response to this sense that we must seize this leadership moment and prioritize our efforts to unlock the vast potential of our data in a strategic manner.
The Quest for Legal Clarity in a Complex Global Arena
A central theme of the discussion was the pressing need for a clear legal architecture to govern AI in Canada. With the rapid evolution of the technology, including large language models and agentic AI, legal and regulatory frameworks are struggling to keep pace. Zed noted the shifting geopolitical context, with the European Union advancing its comprehensive AI Act while the United States maintains a stronger focus on innovation. This leaves Canada to chart its own course, a task complicated by significant legislative uncertainty surrounding its proposed Artificial Intelligence and Data Act (AIDA). With Bill C-27 from the former parliamentary session now widely considered defunct, it remains an open question whether the government will attempt to re-introduce a revised version or scrap it in favour of an entirely new legislative framework.
While unable to announce a specific new framework, Zed stressed that extensive consultations and experiences have informed the advice that will be provided to the government's new political leadership. In the interim, he suggested that we must leverage our existing legal systems, particularly the flexibility of the common law, which can adapt and evolve over time through case law. However, he also recognized the inherent drawbacks of this approach, namely a lack of immediate clarity and the slow pace of the court system. The challenge lies in using the current system to thoughtfully navigate competing interests and put critical questions before the courts, while working towards a more defined regulatory structure that fosters innovation while ensuring safety and public trust.
Building Public Trust: The Bedrock of AI Adoption
Throughout the event, Zed repeatedly emphasized that building public trust is the critical and essential foundation for moving forward with AI. He acknowledged the significant public anxiety surrounding AI, from concerns about job displacement to the proliferation of disinformation. To counter this, the government's approach must be inclusive, transparent, and keep human beings at the centre.
Within the federal public service, this commitment is being put into practice through instruments like the Directive on Automated Decision-Making and specific guidance on generative AI. These measures are designed to ensure transparency, governance, and oversight, with key principles keeping humans in the loop. Furthermore, Zed confirmed that the government is working to build a public registry of all AI systems used by federal institutions and already requires mandatory algorithmic impact assessments, both core measures for enhancing transparency and accountability. He also highlighted the importance of public education and literacy to ensure that the adoption of AI does not exacerbate existing societal divides. By addressing public concerns head-on and embedding responsible, human-centric principles into its own use of AI, the government aims to demonstrate trustworthy stewardship of the technology, paving the way for broader societal acceptance and adoption.
The path forward for Canadian AI is not merely a technical challenge, but a profound legal and social one. As Jordan Zed's discussion made clear, successfully navigating this future requires more than just better algorithms; it demands a robust legal architecture, a proactive approach to governance, and an unwavering commitment to public trust. For Canada’s legal, academic, and policy communities, the task is clear: to help thread the needle between American-style innovation and European-style regulation, ensuring that as we embrace the immense opportunities of AI, we do so in a way that is safe, equitable, and fundamentally Canadian.