In 2011, a crested macaque named Naruto grabbed a wildlife photographer’s unattended camera in an Indonesian jungle and took a now-famous selfie. Years of courtroom battles followed over a deceptively simple question: who owned the photo? The photographer? Naruto? No one? Courts ultimately decided that a non-human being cannot hold copyright. Case closed.
Now imagine the camera is an AI system—one that writes, creates, decides, and acts. The preconfigured camera of 2011 has become the pre-trained AI of 2026. Who is responsible when it gets things wrong? Who sets the rules? Who enforces them?
Canada does not have a good answer. And that absence is not a minor administrative gap; it is a structural failure with real consequences for real people.
Canada is falling behind
Artificial intelligence is no longer the future. It is already embedded in how Canadians are hired, how they receive health care, how their creditworthiness is assessed, and how their social media feeds are curated. These systems make consequential decisions, at scale, often invisibly, often without appeal.
Yet Canada still has no comprehensive AI law. The previous government’s Bill C-27, which included the Artificial Intelligence and Data Act, died on the order paper when Parliament was prorogued in January 2025. A new government now holds office, and Minister of Artificial Intelligence and Digital Innovation Evan Solomon has signalled that AI governance is a priority. The question is whether that signal will translate into action before the gap widens further.
Meanwhile, the European Union’s AI Act is in force. The United States is advancing sector-specific AI rules. The United Kingdom has issued binding guidance to regulators. Canada, once a global leader in AI research, is watching from the sidelines as other jurisdictions shape the norms that will govern this technology for a generation.
What the gap actually looks like
The governance vacuum is not abstract. Consider three areas where Canadians are affected right now.
Hiring and employment: In many workplaces, AI screening tools now filter resumes before any human reads them. Research has consistently shown that these systems can reproduce and amplify historical biases against women, racialized applicants, and people with non-traditional career paths. In Canada, no law specifically requires employers to disclose when AI is making or influencing hiring decisions, let alone audit those systems for fairness.
Health care: AI diagnostic tools are entering Canadian hospitals and clinics. These systems can be extraordinarily useful and can also fail in ways that are difficult to detect. When they do, questions of liability are unresolved. Patients may not know an AI assisted with their care. There is no national standard for how such tools should be tested, validated, or monitored after deployment.
Surveillance and public safety: Facial recognition technology has been used by Canadian police forces and retailers with minimal legal oversight. The Privacy Commissioner has found violations. Enforcement has been slow. The same AI tools that might help solve a serious crime can misidentify an innocent person and, in some communities, that risk falls disproportionately on Black and Indigenous Peoples.
In each of these cases, the technology moved faster than the rules. That is not a natural law. It is a policy choice, and it can be changed.
What Canada must do
Getting AI governance right does not require choosing between innovation and protection. Countries that have moved early on regulation have found that clear rules actually build public trust, attract responsible investment, and give domestic companies a stable environment to compete globally. The false choice between growth and accountability has cost Canada time it cannot get back.
Four things need to happen, and they need to happen together.
First, Parliament must pass federal AI legislation. A new bill should establish minimum standards for transparency, accountability, and risk management for AI systems used in high-stakes settings: hiring, health, credit, criminal justice, and public services. It must include meaningful penalties for non-compliance, not just voluntary guidelines.
Second, Canada needs a dedicated AI oversight body. Existing regulators, including the Privacy Commissioner, the Canadian Human Rights Commission, and sector-specific agencies, lack the mandate, technical capacity, and coordination mechanisms to govern AI effectively. A new independent regulator with real investigative and enforcement powers is not optional. It is the infrastructure upon which everything else depends.
Third, public participation must be built into the process. AI governance cannot be designed in technical working groups and released as a fait accompli. Canadians, including communities most likely to be harmed by biased or opaque systems, must have meaningful input into the rules that will govern this technology. That means funded public consultations, not web-based comment portals that most people never see.
Fourth, AI transparency must become the default. Organizations deploying AI in ways that affect Canadians’ lives should be required to disclose that use, explain in plain language how decisions are made, and provide avenues for challenge and appeal. The right to know that an algorithm affected a decision about you is not a technical nicety. It is a basic condition of democratic accountability.
The camera is already clicking
Naruto the macaque could not own the photo he took, but the image went around the world regardless. The governance question came after the fact, in courtrooms, years later.
Canada is in danger of repeating that pattern at enormous scale. AI systems are already making decisions that shape people’s lives, opportunities, and freedoms. Who is responsible, who sets the rules, who enforces them, who can be held to account: these questions are being answered right now, by default, in the absence of law.
Canadians deserve better than governance by accident. The new government has a window. It should use it.