Canadian online policy is marked by confusion and slow progress, especially on privacy. Recent legislative proposals like the Online Harms Act (OHA) and Bill C-22, An Act respecting lawful access, have sparked debates about state surveillance and mandatory metadata retention, raising fears of privacy invasions. While these concerns are valid, the proposed mechanisms are actually more transparent than the ones already in place. To address privacy concerns, Canada should prioritize public oversight of such mechanisms and reform the Personal Information Protection and Electronic Documents Act (PIPEDA).

So, what is the confusion all about?

Debate over the original Online Harms Act (Bill C-63, a.k.a. the OHA) revealed a recurring fault line in Canadian digital policy: the central fear has been that the legislation would enable state surveillance of Canadians. This concern has returned with An Act respecting lawful access (Bill C-22), notably via Michael Geist, the Canada Research Chair in Internet and E-commerce Law, who regularly appears before Canadian Parliamentary committees to provide expert testimony on digital policy, copyright, privacy, and online regulation.

Geist framed his concerns in a blog post:

Buried in the second half of Bill C-22 is a provision granting the government the power to require “core providers” to retain categories of metadata, including transmission data, for up to one year. This is mandatory metadata retention that would require telecom and electronic service providers to store information about the communications of all their users, regardless of whether those users are suspected of anything. It is one of the most privacy invasive tools a government can deploy and the international experience suggests that there are major privacy risks.

While well-intentioned, Geist's framing reflects early-internet ideals (openness, decentralization, minimal regulation) that, in today's concentrated platform economy, tend to reinforce incumbent power, aligning his position with major technology firms at the level of policy outcomes. In this particular instance, his criticism also overlooks the fact that these companies already retain comparable categories of metadata under PIPEDA.

In addition to this resurgent privacy concern around the new bill (C-22), there has also been talk of reviving the OHA since at least December 2025, after it died when Parliament was dissolved in early 2025.

These acts are not the primary tools the Canadian government uses to acquire or operationalise online data. They target specific concerns and allow strategic access to defined data. Critics often claim that the federal government is creating a surveillance state. Unfortunately, that state already exists—but it’s not because of these legal mechanisms.

Is the threat real?

The OHA was supposed to help fight online harms like hate, gender-based violence, child exploitation, and non-consensual image sharing. Similarly, Bill C-22 aims to modernize criminal investigation tools, helping law enforcement and the Canadian Security Intelligence Service (CSIS) protect citizens and respond to threats such as human trafficking, child exploitation, and extortion.

While vigilance about privacy and civil liberties is essential, this argument misidentifies the primary sources of digital surveillance and obscures the structural realities of the online environment.

Arguments of this nature rely on an implicit assumption: that state regulation is the dominant threat to privacy. In reality, the dominant threats arise from unregulated or weakly regulated private data extraction, compounded by foreign jurisdictional reach. This benefits service providers that also function as data brokers, including Meta and Microsoft.

While Microsoft, for example, is not a registered data broker, its December 2025 privacy statement notes that it collects “data from you, through our interactions with you and through [… its] products”, and uses that data to “provide,” “improve and develop” its products, and to “advertise and market to you.” It also shares this information with third parties. This tension around terms is clear to people working in digital safety.

Lisa LeVasseur, the Executive Director of Internet Safety Labs, notes that definitional ambiguity allows industry to relabel practices to evade regulation. In her own assessment, she uses the term “data fencers,” as the companies doing this often operate in a grey area, “one that runs roughshod over privacy.” These data fencers buy, sell, and trade data about everyone using the internet.

In the best-case (though still rather disheartening) scenarios, individuals have granted access to such information by agreeing to an application's terms of service. Regardless, and as a constant, “these companies are harvesting data [even] when you are just browsing a site; in general, not [even] logged in. So, the concept of consent or permission is rather illusory.” The invasiveness can be as granular as turning on your mobile's microphone, without notifying you, to learn what is going on in your environment. Another common issue is location tracking; even “anonymized” location data can be weaponized, as demonstrated by the 2021 case in which app-derived location data was used to identify and publicly out an American priest.
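To see why “anonymized” location data offers so little protection, consider a minimal sketch (all data and the lookup table below are invented for illustration): stripping a person's name from their pings does nothing when the pings themselves reveal where they sleep.

```python
# Hedged sketch of how pseudonymous location data re-identifies a person.
# The pings, coordinates, and "residents" table are all hypothetical.
from collections import Counter

# Pseudonymous ad-ID pings: (hour_of_day, coarse lat/lon cell)
pings = [
    (2, (45.42, -75.69)), (3, (45.42, -75.69)), (23, (45.42, -75.69)),
    (13, (45.40, -75.70)), (14, (45.40, -75.70)),
]

# Overnight pings cluster at a single cell: almost certainly "home".
night = [cell for hour, cell in pings if hour <= 5 or hour >= 22]
home = Counter(night).most_common(1)[0][0]

# Joining that cell against any address-to-resident dataset
# (property records, a leaked list, a people-search site) names the person.
residents = {(45.42, -75.69): "J. Doe"}  # hypothetical lookup table
identity = residents.get(home)
```

The point is the join, not the values: any auxiliary dataset that links places to people converts a “de-identified” trace back into a name, which is essentially what happened in the 2021 priest case.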

As costs rise, many Canadians increasingly rely on AI chatbots for advice, including around mental health. While this expands access, it also dramatically increases the sensitivity of data being captured outside any meaningful regulatory framework.

For one, it was recently revealed that OpenAI, the company that created and provides various tiers of ChatGPT, the most popular large language model (LLM) product, is planning to start including advertising in its free offerings.

To understand why this should be concerning, we need to understand how online advertising works. Online advertising operates through real-time bidding, where individual ad impressions are auctioned off in milliseconds. Advertisers rely on detailed user profiles to determine their bids, and the same firms selling ad space often sell or broker the underlying data, collapsing any real separation between advertising and surveillance.
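The economics can be sketched in a few lines. This is a drastically simplified toy auction, not how any real exchange prices impressions (real systems follow protocols like OpenRTB, with bid requests, demand-side platforms, and millisecond timeouts); the profile fields and multipliers below are invented to show why richer profiles make the same impression worth more.

```python
# Toy second-price auction: illustrates why detailed user profiles are the
# fuel of real-time bidding. All fields and multipliers are hypothetical.

def bid(base_cpm: float, profile: dict) -> float:
    """A bidder prices an impression higher the more it knows about the user."""
    multiplier = 1.0
    if "location" in profile:
        multiplier += 0.5          # geo-targeting raises perceived value
    if "health_interest" in profile:
        multiplier += 1.5          # sensitive inferences are worth the most
    return base_cpm * multiplier

def run_auction(profile: dict, base_bids: list[float]) -> tuple[int, float]:
    """Second-price auction: highest bidder wins, pays the runner-up's bid."""
    bids = [bid(b, profile) for b in base_bids]
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

sparse_profile = {}
rich_profile = {"location": "Toronto", "health_interest": "anxiety"}

_, price_sparse = run_auction(sparse_profile, [1.0, 2.0, 3.0])
_, price_rich = run_auction(rich_profile, [1.0, 2.0, 3.0])
```

Running this, the same impression clears at a higher price against the richer profile; that price gap is the standing incentive to collect, infer, and trade as much about each user as possible.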

The second concern is that the Canada-United States-Mexico Agreement (CUSMA) prohibits forced data localization: essentially, the Canadian government cannot require companies to house Canadian data inside Canada. Free-flowing data makes for smoother operations, goes the argument, and any limitation would be “arbitrary”.

This concern is compounded by two additional factors. The first is that Canada’s Personal Information Protection and Electronic Documents Act (S.C. 2000, c. 5), more commonly referred to as PIPEDA, is out of date. This is unsurprising, as it came into effect in the year 2000, an era of wide-eyed faith in the possibilities of the internet. As a result, PIPEDA basically allows data to move between businesses for any business-related purpose, without the knowledge of the individuals using online services and products; the text literally calls these “legitimate business interests.” The second factor is the Americans.

American extraterritorial surveillance

Most dominant platforms in Canada are U.S.‑based and subject to American law, including the CLOUD Act, which enables U.S. police and security agencies to gain extraterritorial data access. As a result, Canadian user data is already accessible to foreign law enforcement under legal regimes over which Canada has limited, if any, influence. 

This data extraction obviously has commercial value: not only do data fencers sell it amongst themselves, but governments both local and far-flung are purchasing and operationalising it as well. Even so, what good is this doing the average Canadian? None. Just recently, Jim Balsillie, former co-CEO of BlackBerry and Canadian Shield Institute board member, reminded us of the ways in which our laws don’t at all protect us from exploitative data extraction, leading to “markets that are neither free nor fair, driving higher costs-of-living and eroding paycheques for most of Canada’s working population.”

As Yanis Varoufakis noted in 2023 as a general global concern, and as Vass Bednar has noted more recently in the specifically Canadian context, as the “U.S. state and Big Tech become one, we become digital serfs.”

Another concern is that AI companies that learn about your health, for example, aren’t bound by the traditional regulatory frameworks that govern healthcare. They can, and will, sell your health data.

This dynamic is starkly illustrated in recent reporting by 404 Media, which documents the extent to which surveillance has become embedded in everyday life, from AI‑assisted home security systems like Ring that enable neighbourhood‑wide monitoring, to companies like Palantir that power enforcement tools for agencies like ICE, including systems used to identify and target specific communities. A January 2026 investigation further underscores the scale of this problem, revealing that the firm PenLink provides ICE with access to commercial location data derived from hundreds of millions of phones—data that the agency can query without a warrant. 

Together, these accounts demonstrate how private surveillance infrastructures not only normalize population‑level monitoring, but also actively circumvent traditional legal safeguards.

Beyond this, there are also more drastic and speculative concerns. For example, The Economist recently reported, in an article entitled “The start of the Iran war was determined by spying success,” that advanced signals-intelligence collectors have access to an unprecedented amount of data from phones, smart devices, and modern car apps. Both American and Israeli intelligence use AI to analyze this information, with the U.S. benefiting from its connection to Silicon Valley.

What to make of all this?

At present, it is hard to say what exactly is being done with all of our online data here in Canada. We do know, for example, that our government is buying significantly from the ever more controversial Palantir Technologies; in fact, a search of contracts shows 782 853 results as of April 2026 (a figure that constantly changes as contracts are tendered, signed, or expire).

What we do know, however, is that government overreach is something that has happened with detrimental effects. For example, CBC recently revealed “newly declassified RCMP Security Service files [that] confirm Canada’s Cold War-era domestic intelligence agency infiltrated and sought to disrupt legitimate political Indigenous organizations in the 1970s, in an extensive program of covert surveillance, informants and countersubversion.” This problematic behaviour continues today—e.g. with what happened in relation to the Wet’suwet’en land defenders in 2025.

Until recently, the private sector’s capacity for online surveillance far exceeded that of most democratic states, both in scale and granularity. Major social media and content platforms persistently track browsing behaviour, location metadata, engagement patterns, social graphs, and inferred interests and vulnerabilities. With recent developments, it is increasingly clear that some form of police state is coming into being through public-private partnerships. It is also clear that the kinds of acts we started this discussion with are not the problem; the problem is unchecked private surveillance and the lack of clear jurisdiction over the data those actors collect.

So, what should we do?

We need to adapt to the times and update our privacy regulations. First, we need to figure out how to silo Canadian data in relation to CUSMA. Then, we need to acknowledge that PIPEDA is insufficient and fully explore what the European Union’s General Data Protection Regulation (GDPR) and proposed Digital Omnibus do and don’t do, to see what we can and should adapt to our own context.

After that trash is taken out and replaced with newer and better tools and rules, we can finally start talking about other pressing matters, like “Scoping AI Chatbots into a revised Online Harms Act”.