At present, the Canadian government’s ability to regulate Canadian digital space is hampered by international agreements. Specifically, Articles 19.17.2 and 19.17.3 of the Canada-United States-Mexico Agreement (CUSMA) form a liability shield protecting online media companies. This threatens our digital sovereignty and increases our susceptibility to online harms. Canadian negotiators should aim to remove, or at least modify, these provisions during the upcoming 2026 CUSMA review. However, a backup plan is necessary.

The European Union Digital Services Act (EU DSA) could inspire a framework to address disinformation and online harms in Canada that would be permissible even within the current legal context, including the World Trade Organization’s General Agreement on Trade in Services (GATS). The framework should emphasize reasonable transparency and regular reviews, to ensure accountability and good-faith conduct on the part of the private corporations that now moderate the digital public squares of the 21st century.

This framework will need a name. Let’s call it the Online Platform and Intermediary Integrity Act.

The framework could start with a voluntary code of conduct on disinformation, similar to the EU’s DSA Code of Conduct. Disinformation is a central multiplier of harms: it not only confuses people but can also lead them to make decisions that run against their own interests and may harm others. This mechanism might be called the Reasonable Code of Conduct Agreement on Disinformation.

Signatories would commit to demonetizing disinformation content and actors, ensuring transparency in political advertising, clearly labelling Coordinated Inauthentic Behaviour, providing user empowerment tools (data privacy controls, additional filtering and blocking options, and avoidance of “dark patterns” in design), and supporting independent research.

There should also be risk-based platform obligations. Platforms could be required to conduct periodic risk assessments: annual for Very Large Online Platforms, and biennial for smaller platforms.

The definition of Very Large Online Platforms will need tailoring to the Canadian context, with options including user thresholds, adherence to EU designations, or a combinatory system.

Platforms would then be required to publish mitigation plans and implementation timelines, and regulatory authorities could request redacted assessment disclosures. Redactions would be limited to trade secrets and similarly sensitive information.
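The cadence and designation rules above could be sketched in code. This is a minimal illustration, assuming a combinatory designation system; the 10-million-user threshold and the deference to an existing EU designation are hypothetical figures for illustration, not numbers proposed by the framework.

```python
from dataclasses import dataclass

# Hypothetical threshold; the actual figure would be set after consultation.
VLOP_USER_THRESHOLD = 10_000_000


@dataclass
class Platform:
    name: str
    monthly_active_users_canada: int
    eu_designated_vlop: bool = False  # combinatory option: defer to the EU's list


def is_vlop(p: Platform) -> bool:
    """Combinatory designation: a Canadian user threshold OR an existing EU designation."""
    return p.monthly_active_users_canada >= VLOP_USER_THRESHOLD or p.eu_designated_vlop


def risk_assessment_interval_years(p: Platform) -> int:
    """Annual risk assessments for Very Large Online Platforms, biennial otherwise."""
    return 1 if is_vlop(p) else 2
```

A combinatory rule like this would keep small domestic platforms out of the heaviest tier while still capturing platforms the EU has already judged to pose systemic risk.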

Platforms could be required to publish standardized transparency reports, and a multi-stakeholder body would develop reporting metrics.

There would also need to be auditing and oversight. Very Large Online Platforms might be subject to annual third-party audits, with findings published in redacted form to safeguard sensitive information. An office, perhaps called the Platform Transparency and Integrity Office, would coordinate independent audits, facilitate research access to platform data, and act as a liaison with international regulatory counterparts.

In keeping with our CUSMA obligations, platforms would not be treated as content creators (unless they actually create the content) and would therefore not be held liable for user content. Platforms would simply carry procedural obligations (reporting, auditing, transparency).

A rapid takedown mechanism should also be implemented, responsive to the needs of people victimized by the spread of content such as “revenge porn”: anyone could report material, and platforms would be required to review and act on reports quickly. The mechanism would also notify the users who produced the content, state the reasons for removal, and give them the option to appeal.
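The report–review–notify–appeal flow above can be sketched as a simple state machine. All names here are hypothetical illustrations of the procedural steps, not a specification from the framework.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    REPORTED = auto()      # awaiting platform review
    REMOVED = auto()       # taken down after review
    DISMISSED = auto()     # reviewed, no action warranted
    UNDER_APPEAL = auto()  # producer has contested the removal


@dataclass
class TakedownReport:
    content_id: str
    reason: str
    status: Status = Status.REPORTED
    producer_notices: list = field(default_factory=list)


def review(report: TakedownReport, removal_warranted: bool) -> None:
    """Platform review step: remove or dismiss, notifying the producer on removal."""
    if removal_warranted:
        report.status = Status.REMOVED
        report.producer_notices.append(
            f"Content {report.content_id} was removed ({report.reason}). You may appeal."
        )
    else:
        report.status = Status.DISMISSED


def appeal(report: TakedownReport) -> None:
    """Producer contests a removal; only removed content can enter appeal."""
    if report.status is Status.REMOVED:
        report.status = Status.UNDER_APPEAL
```

The key procedural safeguards are visible in the structure itself: removal always generates a reasoned notice, and the appeal path exists only after a removal, never before review.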

Finally, funding needs to be considered. Europe’s DSA charges supervisory fees capped at 0.05 per cent of a provider’s worldwide annual net income, and it seems reasonable to mimic such a system. The disbursement of these funds would fall under the purview of the Platform Transparency and Integrity Office.
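The arithmetic of a capped cost-recovery fee is simple enough to state directly. This sketch assumes, as the DSA does, that each platform is billed its allocated share of oversight costs, never more than the cap:

```python
# 0.05 per cent, mirroring the DSA's supervisory-fee cap.
FEE_CAP_FRACTION = 0.0005


def supervisory_fee(allocated_oversight_cost: float, annual_net_income: float) -> float:
    """Recover the platform's allocated share of oversight costs,
    capped at 0.05% of its annual net income."""
    return min(allocated_oversight_cost, FEE_CAP_FRACTION * annual_net_income)
```

For a platform with $10 billion in annual net income, the cap is $5 million, so the fee equals the allocated cost share whenever that share stays below $5 million.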

This is more of a guide than an extensive elaboration. There are also two major deviations from Europe’s DSA:

  1. Very Large Online Search Engines are excluded on grounds of complexity, diplomatic risks, and Charter concerns. Focusing on Very Large Online Platforms allows Canada to target the most pressing harms, match regulatory capacity to need, and build an incremental approach that can expand later if necessary.
  2. There are no comparable individuated units like the DSA’s Digital Service Coordinators, which operate at the national level in the EU. Adapting such a structure to Canadian provinces would likely invite disharmony to no real benefit.

Getting started

Step 1: Initiate a series of multi-stakeholder consultations to draft the anti-disinformation policy—these would include platform representatives, government officials, academics, and researchers. These are essential to work out agreeable terms and engage the relevant parties in a long-term dialogue that increases buy-in.

Step 2: Establish the responsible office and create standards for reporting and audits.

Step 3: Pilot the anti-disinformation policy with willing platforms and iterate based on audit findings. From there, the transparency data and evidence from the pilots could be used to build support for potential future statutory powers.

Together, these mechanisms could hamper disinformation and online harms in several ways. By demonetizing disinformation content and ensuring transparency in political advertising and around Coordinated Inauthentic Behaviour, the framework would reduce the financial incentives for spreading harmful content and increase accountability. User empowerment tools to fact-check and report disinformation, along with support for independent research, would help with the identification and mitigation of harmful content. Research should further increase our capacity to perceive patterns that could be used to track developing issues, and potentially even to create checkpoint-style notification systems that point individual users toward resources.

Risk assessments and the publication of mitigation plans would ensure that platforms are proactive in addressing emerging threats, like those related to hate group campaigns and Technology-Facilitated Gender-Based Violence.

Transparency reports and third-party audits would enhance accountability and allow for independent scrutiny, ensuring that harmful content is addressed promptly and effectively. Additionally, the explicit legal protections from liability should encourage more voluntary cooperation and foster a less antagonistic relationship between all relevant parties.

This Canadian DSA-like framework is likely to face criticism on free expression grounds, with concerns that takedown duties could chill speech, as well as pushback over costs, and fears of government overreach. These risks can be mitigated by limiting content removals to clearly illegal content, embedding strong procedural safeguards (appeals, transparency, misuse protections), and framing supervisory fees as capped cost-recovery measures. For Very Large Online Platforms, this framework offers real advantages: regulatory certainty, alignment with the EU DSA, lower litigation risk, and reputational benefits from demonstrating accountability.

Another concern in the Canadian context is increased bureaucracy, often framed as a waste of taxpayer funds. However, as with the DSA, supervisory fees would be designed to cover the full cost of administering the Online Platform and Intermediary Integrity Act. It might even make sense to frame it as the creation of new jobs.

In the end, safeguarding our digital commons demands robust policy and a spirit of collective vigilance and innovation. As we reimagine the boundaries of platform responsibility and user empowerment, the tools we forge become the architecture of a more resilient online world. It is here, at this intersection of regulation and cooperation, that the promise of a safer, fairer, and more truthful digital future can be realized for Canada. The challenge lies in our willingness to act as stewards of the vast and delicate landscape where our lives increasingly unfold. We need to forge a way ahead that mitigates the risks and harms inherent to our evermore chronically online lives.