Policymakers and industry leaders have spent years making all manner of promises about how new advances in digital technology and artificial intelligence are going to transform the economy and our everyday life. The federal government, for its part, has proposed “more than $1 billion over the next five years to build up Canada’s artificial intelligence and quantum computing ecosystems while embedding AI technology more deeply in federal government operations.”
In trying to navigate a tumultuous shift in Canada’s relationship with the United States, Prime Minister Mark Carney and his government are advocating for an approach that centres AI and data sovereignty. AI Minister Evan Solomon, however, is insisting that Canada “move away from ‘over-indexing on warnings and regulation’ to make sure the economy benefits from AI.”
Canada’s failure to regulate the development and implementation of AI technologies is a huge problem. Though federal policymakers have developed many non-binding frameworks around AI, Canada lacks binding AI regulation, leaving Canadians without proper protections against AI harms to privacy and human rights.
In September 2025, the federal government launched an AI Strategy Task Force and a 30-day “national sprint” to gather public input for a renewed AI Strategy, but this initiative continues to miss the mark. As human rights and civil liberties organizations and leading academics made clear in an open letter last October, Canadians cannot be sovereign over a technology they are not protected against. In its rush to a strategy and its privileging of business interests, the government is turning a blind eye to the well-documented harms, threats to privacy, and environmental costs these technologies pose.
Previous attempts at AI policy
The 2022 Artificial Intelligence and Data Act (AIDA) was Canada’s first attempt to regulate AI and address concerns regarding privacy and human rights. It would have required assessments of AI harm and bias, but only for high-impact AI systems. By contrast, the pioneering European Union AI Act adopted a tiered risk-based approach, establishing four levels of risk for AI systems—unacceptable risk, high risk, limited risk and minimal or no risk—and applying obligations to the first three.
Many criticized the 2022 legislation because of its “exclusionary public consultation process, its vague scope and requirements, and its lack of independent regulatory oversight.” Experts and scholars called on the government to abandon the Act, saying it failed “to integrate an assessment of human rights impacts or to effectively set limits based on human rights implications.”
Experts have described the AIDA as problematic for the way it defined AI risks and harms. The bill focused on individualistic and quantifiable understandings of harm, neglecting the environmental and community-level impacts that are more difficult to quantify. In short, the AIDA wasn’t set up to adequately equip people with the definitions, language or understandings of harm they would need to submit complaints against AI systems.
The concerns didn’t stop there. Human, privacy and labour rights organizations also emphasized how the AIDA did not seem to be designed “to protect those in greatest need of protection against AI.” In addition, it was noted that federal consultations prioritized the private sector over the “sectors and workers vulnerable to the impacts of AI systems, marginalized communities, and civil society organizations.”
As it turned out, Bill C-27 died on the order paper when former Prime Minister Trudeau resigned and prorogued parliament. Today, there are still no binding legal obligations and policy guidelines governing the development and operation of AI in Canada.
Recent policy developments
Innovation, Science and Economic Development Canada released the results of its “national sprint” on February 3, 2026. The Report highlights many of the concerns raised by the human rights groups and academic experts in the open letter posted last fall, including on privacy, safety, transparency, accountability, proper governance, systemic bias, and environmental harms. Worries about job displacement, compensation for vulnerable workers and a secure, sovereign infrastructure were also emphasized throughout.
The need for meaningful and effective legislative guardrails to harness the potential of AI and mitigate its individual and collective harms is pressing. But it remains to be seen whether the government’s final strategy will deliver. The use of generative AI tools from large AI companies such as Cohere, OpenAI, Anthropic and Google to read through the public and task force submissions and generate “unbiased reporting in record time” doesn’t inspire confidence.
What’s needed
While AI technologies are evolving at a rapid rate, policymakers must adopt a more holistic approach to assessing AI use and its impacts. To that end, the federal government should create a complaint mechanism for AI-related harms, perhaps through a federal AI ombudsperson or through collaboration with the Canadian Human Rights Commission.
In addition to a more robust complaint mechanism, it is critical to consider investigation and enforcement mechanisms such as an AI and Data Commissioner, as previously proposed in the AIDA. These mechanisms would proactively assess AI-related harms before they occur, rather than relying on individuals to bring forward claims.
Policymakers claim that building public trust in AI is a priority for Canada. To do so, the federal government must reconcile the importance of AI regulations with the economic benefits of the technologies. To this end, it is imperative that the federal government introduce legally binding instruments to further protect Canadians and to promote safe use of AI in both the public and private sectors, including federal service delivery.
Canada should also adopt the EU’s approach to categorizing safe and harmful AI systems. Instead of a binary understanding of AI systems, Canada should introduce a tiered risk-based approach to better assess the potential harms of all systems, not only those categorized as having high-impact consequences.
The AIDA defined harm as “(a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual.” Policymakers should re-evaluate the definition of harm in new AI legislation and expand it beyond individual, quantifiable (in dollar amounts) or property interests. AI risks can affect people and communities in ways that are difficult to quantify, including harms to dignity, privacy, human rights and the environment.
No shortcut will lead to an AI strategy or AI policies and regulations that can effectively protect Canadians. As AI technological advancements speed up, as industry investments grow and as those impacted by AI harms and biases are left with little recourse, it is imperative to prioritize diligent and collaborative efforts to create and implement AI policies and regulations that centre human rights.
The author would like to thank Katherine Scott and Hadrian Mertins-Kirkwood for their indispensable guidance and insightful contributions to this op-ed. Thanks also to Professor Teresa Scassa, Professor Fenwick McKelvey, Professor Blair Attard-Frost, Ana Brandusescu and Professor Joanna Redden, interdisciplinary scholars and experts who were incredibly generous with their time and expertise in the intersecting matters of AI, policy and human rights.


