The world is in the throes of an artificial intelligence (AI) hype cycle. Tech companies are attracting billions of dollars in AI investment, AI tools are proliferating in consumer and business applications, and governments are rushing to embrace AI for its supposed productivity benefits. In their election platform, the Liberals promised to “supercharge” AI adoption as a key pillar of their economic strategy—and, after forming government, they appointed the first-ever minister of artificial intelligence to cabinet. Without a transparent mandate, it is not clear what this minister will actually do, but it is clear that AI is a top priority for the Carney government.

Claims of AI revolutionizing the economy have been overstated, but the potential for AI to dramatically impact the economy and workforce in the coming years remains both a significant risk and opportunity.1Organization for Economic Co-operation and Development, The impact of Artificial Intelligence on productivity, distribution and growth: Key mechanisms, initial evidence and policy challenges, OECD, April 2024, https://www.oecd.org/en/publications/the-impact-of-artificial-intelligence-on-productivity-distribution-and-growth_8d900037-en.html. To mitigate the potential harms of AI while realizing its benefits, Canada requires a comprehensive and proactive policy approach that puts the public interest first. Governments failed to do so with other recent transformative technologies, such as social media, for which our democracies and collective mental health are now paying a dire price.2Philipp Lorenz-Spreen, Lisa Oswald, Stephan Lewandowsky & Ralph Hertwig, A systematic review of worldwide causal and correlational evidence on digital media and democracy, Nature Human Behaviour vol. 7, November 2022, https://www.nature.com/articles/s41562-022-01460-1; Fazida Karim, Azeezat A. Oyewande, Lamis F. Abdalla, Reem Chaudhry Ehsanullah & Safeera Khan, Social Media Use and Its Connection to Mental Health: A Systematic Review, Cureus vol. 12 (no. 6), June 2020, https://pmc.ncbi.nlm.nih.gov/articles/PMC7364393. We cannot afford to make the same mistake with AI.

Overview

Artificial intelligence is not new. Different forms of AI, such as machine learning systems, have been in use for decades to study patterns and make predictions in finance, meteorology and many other fields. The current wave of AI disruption is being driven specifically by breakthroughs in generative AI (Gen-AI) systems, which produce seemingly original content based on a user’s input prompt. Whether Gen-AI is truly “creative” remains a point of debate, but these systems can clearly perform—to a greater or lesser extent—many creative and problem-solving tasks that were previously assumed to be the exclusive domain of human intelligence, such as art, storytelling and therapy.3Kent F. Hubert, Kim N. Awa & Darya L. Zabelina, The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks, Scientific Reports vol. 14, February 2024, https://www.nature.com/articles/s41598-024-53303-w.

Risks of Gen-AI

These new AI systems raise various ethical, legal and practical concerns, some of which are inherent to the technology. For example, Gen-AI systems are prone to inaccuracies—a phenomenon known as AI hallucination or confabulation—which causes these systems to invent, repeat and confidently defend falsehoods. They have hidden ideological biases because of the data they were trained on.4David Rozado, Measuring Political Preferences in AI Systems: An Integrative Approach, Manhattan Institute, January 2025, https://manhattan.institute/article/measuring-political-preferences-in-ai-systems-an-integrative-approach. They have additional biases deliberately programmed by the companies that operate them.5See, for example: Kate Conger, “Employee’s Change Caused xAI’s Chatbot to Veer Into South African Politics,” The New York Times, May 16, 2025, https://www.nytimes.com/2025/05/16/technology/xai-elon-musk-south-africa.html. The data centres they require consume enormous amounts of water and electricity.6Christian Bogmans, Patricia Gomez-Gonzalez, Ganchimeg Ganpurev, Giovanni Melina, Andrea Pescatori & Sneha D. Thube, Power Hungry: How AI Will Drive Energy Demand, International Monetary Fund, April 2025, https://www.imf.org/en/Publications/WP/Issues/2025/04/21/Power-Hungry-How-AI-Will-Drive-Energy-Demand-566304.

Other issues relate to the specific way these systems have been developed and deployed. Crucially, they are largely owned by private U.S. tech companies that have an incentive to concentrate wealth, data and control in their own corporate ecosystems, including by collecting and exploiting the private and confidential data of users. Indeed, most major AI models today were trained on personal, private and copyrighted material without notifying or compensating the creators. AI systems are thus in a legal grey area when it comes to privacy, copyright and related legislation.7Innovation, Science and Economic Development Canada, Consultation on Copyright in the Age of Generative Artificial Intelligence: What we heard report, Government of Canada, February 2025, https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/consultation-copyright-age-generative-artificial-intelligence-what-we-heard-report; see also: Petra Molnar & Lex Gill, Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System, International Human Rights Program (Faculty of Law, University of Toronto) and the Citizen Lab (Munk School of Global Affairs and Public Policy, University of Toronto), September 2018, https://ihrp.law.utoronto.ca/news/canadas-adoption-ai-immigration-raises-serious-rights-implications.

Gen-AI introduces security risks, not only from the vast collection of data that ends up on servers outside Canada—including from the public institutions that use these tools—but also from AI-powered disinformation campaigns and other forms of psychological warfare.

The potential impacts of AI on the workforce are unclear. According to Statistics Canada, 60 per cent of workers in Canada are “highly exposed” to AI disruption.8Tahsin Mehdi & Marc Frenette, Exposure to Artificial Intelligence in Canadian Jobs: Experimental Estimates, Statistics Canada, September 2024, https://doi.org/10.25318/36280001202400900004-eng. That figure echoes similar international studies, which project major impacts on workers in advanced economies.9See, for example: Mauro Cazzaniga, Florence Jaumotte, Longji Li, Giovanni Melina, Augustus J Panton, Carlo Pizzinelli, Emma J. Rockall & Marina Mendes Tavares, Gen-AI: Artificial Intelligence and the Future of Work, International Monetary Fund, January 2024, https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379; Pawel Gmyrek, Janine Berg, Karol Kamiński, Filip Konopczyński, Agnieszka Ładna, Balint Nafradi, Konrad Rosłaniec & Marek Troszyński, Generative AI and Jobs: A Refined Global Index of Occupational Exposure, International Labour Organization, May 2025, https://www.ilo.org/publications/generative-ai-and-jobs-refined-global-index-occupational-exposure. However, exposure does not necessarily mean replacement. Workers in many occupations, such as lawyers, teachers and engineers, may be highly exposed to AI without their jobs being directly at risk. Precisely how their jobs will change remains to be seen, and will be shaped by various factors, including regulation, collective agreements and cultural expectations.

Nevertheless, there are early anecdotal signs that AI is already replacing jobs in some sectors, especially for young workers in entry-level positions.10See, for example: Chris Wilson-Smith, “AI adoption is upending the job market for entry-level workers,” The Globe and Mail, June 17, 2025, https://www.theglobeandmail.com/business/article-ai-adoption-is-upending-the-job-market-for-entry-level-workers. Youth unemployment is at levels not seen since the pandemic,11Jenna Benchetrit, “Gen Z Is Facing the Worst Youth Unemployment Rate in Decades. Here Is How It’s Different,” CBC News, June 11, 2025, https://www.cbc.ca/news/business/youth-unemployment-rate-1.7549979. and early AI adoption by some employers may be playing a role.

Aside from job displacement, workers face risks from so-called “algorithmic management,” where AI systems are used to supplement or replace traditional supervisory functions. For example, AI-powered cameras are being used to monitor and discipline drivers in commercial trucking and delivery fleets, even where those drivers have good safety and performance records.12Lauren Kaori Gurley, “Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make,” Vice, September 20, 2021, https://www.vice.com/en/article/amazons-ai-cameras-are-punishing-drivers-for-mistakes-they-didnt-make. Coupled with the inherent biases discussed above, algorithmic management may exacerbate discriminatory hiring and disciplinary practices.

The proliferation of Gen-AI also raises broader questions about the health of our societies and democracies. In the education system, for example, AI use among students is rampant despite early evidence suggesting that it may negatively impact critical thinking and other cognitive abilities.13Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein & Pattie Maes, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, MIT Media Lab, June 2025, https://doi.org/10.48550/arXiv.2506.08872. AI systems may be exposing children to potentially harmful content, and students themselves admit they learn less when AI tools are involved.14The Alan Turing Institute, Understanding the Impacts of Generative AI Use on Children, 2025, https://www.turing.ac.uk/research/research-projects/understanding-impacts-generative-ai-use-children; KPMG, Students using generative AI confess they’re not learning as much, October 21, 2024, https://kpmg.com/ca/en/home/media/press-releases/2024/10/students-using-gen-ai-say-they-are-not-learning-as-much.html. Yet there are currently no rules or regulations governing young people’s use of these tools, let alone support for teachers grappling with the implications.

Opportunities in Gen-AI

Despite the risks, AI is not going away. Moreover, safe and responsible AI adoption offers potential benefits. AI that is used to empower workers, rather than replace them, could be a net positive. Canada is home to many leading AI experts, research institutes and firms, so there is also economic potential in a domestic Canadian AI industry.

In Budget 2024, the federal government committed $2.4 billion to support AI infrastructure, the majority of which has been allocated to increasing computing capacity for Canadian AI researchers. The new federal government has made several additional promises in this area, including a $100-million-per-year tax credit for businesses that adopt AI systems and a promise to expedite approvals for new data centres.

The federal government is already experimenting with AI in the public service and has created an “AI Centre of Excellence” to encourage departments to adopt AI. Among other initiatives, the government is piloting an internal AI tool called CANChat, which was developed in part to discourage public servants from using commercial tools, such as ChatGPT, when handling sensitive government data.15Shared Services Canada, “CANChat: SSC’s first generative AI chatbot,” Government of Canada, last modified September 24, 2024, https://www.canada.ca/en/shared-services/campaigns/stories/canchat-sscs-first-generative-ai-chatbot.html.

However, major questions remain concerning the federal government’s long-term plans for internal AI use. It is also unclear how the government plans to ensure that AI is adopted responsibly in other sensitive public and commercial sectors, such as health care or education, that handle personal and private data or work with vulnerable populations.

Canada’s missing regulatory framework

Governments around the world are struggling to regulate the latest generation of artificial intelligence tools.16For one of the stronger examples so far, see: European Commission, “AI Act,” last modified June 3, 2025, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. Canada is no different. The Artificial Intelligence and Data Act was tabled in 2022 but was widely criticized as inadequate and never passed into law.17Blair Attard-Frost, “The Death of Canada’s Artificial Intelligence and Data Act: What Happened, and What’s Next for AI Regulation in Canada?” Montreal AI Ethics Institute, January 17, 2025, https://montrealethics.ai/the-death-of-canadas-artificial-intelligence-and-data-act-what-happened-and-whats-next-for-ai-regulation-in-canada. In 2024, the federal government launched the Canadian AI Safety Institute, which is funding important research projects that may eventually lead to better legislation, but it is still in its early stages. Earlier this year, the federal government also introduced an AI Strategy for the Federal Public Service, which promises, but does not yet set out, a governance and risk management framework for AI.18Treasury Board of Canada, AI Strategy for the Federal Public Service 2025-2027, Government of Canada, March 2025, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html. Little funding has been attached to regulatory work so far.

Putting these frameworks in place is of the utmost importance. Until we do, workers and citizens alike are vulnerable to opaque, foreign-controlled AI systems that may be doing more harm than good.

Actions

The AFB will dedicate $20 million for an expedited Royal Commission on Artificial Intelligence. The potential impacts of AI are so profound that we must have a clear understanding of how Canadian workers, citizens and communities want to enter this new technological age. Recognizing the urgency of the issue, the commission will work on an accelerated timeline. Within one year, it will produce a guiding vision for AI development in Canada—one that is prepared to compromise on aspirations of productivity if they do not align with Canadians’ values and priorities.

The AFB will expedite the development of a new, modernized Artificial Intelligence and Data Act that gives the federal government the necessary power to regulate the proliferation of AI tools. Among other elements, the act will ensure that any AI tool offered to the public in Canada meets minimum standards of safety, reliability and transparency, including validation by independent third parties. It will also include mechanisms to pause or roll back new AI tools where they prove to be harmful after initial approval and deployment.

The AFB will fund a new crown corporation to lead the development of a public, moonshot AI project. Despite being a leader in AI research, Canada significantly lags the U.S., China and France in AI commercialization.19Nestor Maslej, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, Tobi Walsh, Armin Hamrah, Lapo Santarlasci, Julia Betts Lotufo, Alexandra Rome, Andrew Shi & Sukrut Oak, Artificial Intelligence Index Report 2025, Institute for Human-Centered AI (Stanford University), April 2025, https://hai.stanford.edu/ai-index/2025-ai-index-report. In part, that is due to the pervasive trend of successful Canadian tech companies being bought up or otherwise relocating south of the border—a trend that is unlikely to change with the government’s current tax credit approach to industrial strategy. A publicly owned AI project could leverage Canada’s AI expertise and deliver on domestic economic priorities, including a safe and secure AI ecosystem consistent with the recommendations of the Royal Commission on AI, without being vulnerable to foreign acquisition. The AFB allocates $8 billion over four years to kickstart the project, an amount comparable to the valuation of France’s AI firm Mistral.20Adam Satariano, “Mistral, a French A.I. Start-Up, Is Valued at $6.2 Billion,” The New York Times, June 11, 2024, https://www.nytimes.com/2024/06/11/business/mistral-artificial-intelligence-fundraising.html.

The AFB will require that all data centres in Canada be powered by 100 per cent clean electricity. In addition, any new data centres brought online must meet at least 50 per cent of their own electricity needs through new renewable generating capacity.