Social media and online platforms are fueling a surge in hate speech, hate crimes, and technology-facilitated gender-based violence (TFGBV) both in Canada and worldwide. Engagement-driven amplification, user anonymity, networked harassment, and cross-platform mobilization act as catalysts for spreading incendiary content and coordinating hostile networks. Although online harm is rooted in broader social problems, platforms wield considerable control over its spread, yet they largely choose not to exercise that power to curb online abuse. This is due, in part, to the absence of clear legal requirements and to the profit to be made from hosting harmful content. Even where Canada’s Privacy Commissioner recommends that online service providers take specific actions to prevent ongoing harm, providers sometimes decline to comply.
Today’s online world is not merely a reflection of our offline world; rather, it is an active contributor to offline experiences. The online world reinforces and amplifies the harm and hatred experienced beyond technology, shaping community norms and practices in ways that manifest in our offline lives—within our families, schools, workplaces, and communities. With online hate increasingly spilling offline, experts urge comprehensive solutions to the dangers posed by online platforms’ failure to mitigate the spread of malicious content.
Digital Hate, TFGBV, and Their Offline Impact
Online hate manifests as a collection of harmful ideologies, including racism, xenophobia, misogyny, homophobia, transphobia, and other forms of bigotry. Jason Hannan and Matthew McManus, leading Canadian experts on media ecology and far-right theory, note the complexity and global reach of pseudo-philosophies promoted within parts of the tech community, as well as their effects, such as the increasing prevalence of misogyny in classrooms across the country. Increases in online hostility, they assess, often precede and exacerbate spikes in real-world violence.
Analysts point to popular platforms, including TikTok and Instagram, as major drivers of these trends, with pages like Toronto’s 6ixbuzz criticized for amplifying xenophobic commentary. One specific concern is the rise of so-called “Active Clubs”, which recruit young men to their cause in large part through social media, publicly presenting themselves as fitness groups while espousing hateful rhetoric. Active Clubs “include groups like the Canadian Proud Boys and the notorious neo-Nazi terrorist organization Atomwaffen Division, which has been linked to five killings in recent years.”

Further complicating the matter, many acts of online aggression go unreported. Statistics Canada noted in 2024 that not all cyber-related hate crimes are brought to the attention of authorities, often due to fear of reprisal, disbelief, or barriers to accessing law enforcement.
Internationally, platforms have been implicated in facilitating violence, such as the United Nations’ findings about the role Facebook and other platforms played in the genocide of Myanmar’s Rohingya population.
Technology also plays a major role in gender-based violence. TFGBV encompasses a wide range of abusive activities, including doxxing, online threats, voyeurism, stalking, location tracking, sexual exploitation, non-consensual sexting, non-consensual distribution of intimate images, and deepfakes. These harms affect victims both online and offline. Survivors of intimate partner violence face the highest risk of being subjected to TFGBV, which frequently overlaps with emotional, physical, sexual, financial, and psychological abuse. Marginalized girls, women, and femmes are disproportionately affected.
Existing laws have not proven effective in protecting people from TFGBV. Legal messaging can be confusing, inaccurate, or even harmful. This leaves survivors to navigate complex, lengthy, and costly processes without systemic support.
CUSMA as a Regulatory Barrier
The Canada-United States-Mexico Agreement (CUSMA) shields online media companies from liability for harmful content they host, making it difficult for Canada to address the spread of disinformation and abuse. Article 19.17, paragraphs 2 and 3, specifically prohibits holding platforms responsible for third-party content. Ideally, these clauses should be removed during the upcoming 2026 review. If removal is not possible, policymakers must consider alternative legal strategies. Additionally, increased support for the mental health of survivors of online hate, improved access to justice, prevention initiatives, and community education is crucial.
Potential Legal Responses
1. Revising CUSMA and Implementing the Online Harms Act
Removing the liability shield from CUSMA would be a significant step, but not enough on its own. Advocates continue to urge policymakers to create a new Online Harms Act.
The Online Harms Act (Bill C-63), which died on the Order Paper when Parliament was prorogued in early 2025, sought to combat TFGBV and other serious online harms by mandating rapid content removal, expanding legal recourse for victims, and increasing penalties for hate-related offences. The bill would have introduced a statutory duty of care for platforms, reinstated section 13 of the Canadian Human Rights Act, and created new oversight bodies. These bodies included: (1) a Digital Safety Commission with regulatory powers, (2) a Digital Safety Ombudsperson to advocate for victims, and (3) a Digital Safety Office to support their work.
In September, a new Combatting Hate Act (Bill C-9) was introduced and passed both first and second readings in the House of Commons. Throughout October and into November, it has been the subject of standing committee meetings, and though not all proceedings are yet available for viewing, some details have emerged. For example, the Canadian Bar Association (CBA) submitted a brief on November 18 noting that the bill’s definition of “hatred” is confusing, pointing specifically at the clarification clause.
Nonetheless, while Bill C-9 as currently formulated does address important concerns, such as the use of the Nazi swastika, and proposes changes to maximum penalties for offences under its purview, it is not entirely clear how effective it would be, as it still contains ambiguous language. It seems reasonable to adopt some of the language suggested by the CBA, such as specifying that symbols must be used “for the purpose of promoting hatred”; this would exempt educational and historical content and films. That said, such a phrase would itself necessitate further definition.
2. Tort Law and the Uniform Non-consensual Disclosure of Intimate Images Act
Tort law, which provides compensation for civil wrongs, is sometimes proposed as a mechanism for responding to online harm. Privacy torts seem the most promising of these for responding to online harms like TFGBV. Privacy torts let someone sue for damages when another person intrudes on their private life, publicly discloses private facts, misappropriates their likeness, or places them in a false light in a way that would offend a reasonable person. However, these torts are not consistent across Canada’s provinces and territories, are generally underdeveloped in terms of their legal scope and meaning, and do not reflect how today’s technology can be used to threaten a person’s privacy interests.
The Uniform Non-consensual Disclosure of Intimate Images Act (2021) offers a model for harmonized provincial legislation. Nine provinces have adopted similar statutes. The Uniform Act provides survivors with two remedies: traditional claims for damages, and expedited content removal or de-indexing through fast-track proceedings.
The Uniform Act does not impose liability on internet intermediaries, as liability is not the purpose of the proposed legislation and “internet intermediary liability would have important implications for freedom of expression.” Instead, the Uniform Act proposes that intermediaries are not liable where they take “reasonable steps” to remove unlawful content and comply with takedown orders. What qualifies as “reasonable steps” is ultimately left for the courts to determine.
At present, nine provinces (all but Ontario) have statutory causes of action for non-consensual disclosure of intimate images, mostly upholding the spirit of the Uniform Act. But in four provinces, the legislation refers only to the individual who posted the content as being subject to removal, takedown, or de-indexing orders, and makes no reference to the platforms. This limits survivors’ ability to control the spread of unlawful material.
Five provinces (British Columbia, Saskatchewan, Quebec, New Brunswick, and Prince Edward Island) do allow courts to order intermediaries to remove or de-index content. Three of those have adopted the limited liability provisions proposed by the Uniform Act. Saskatchewan has adopted no explicit provisions on intermediary liability, while Quebec has adopted a general liability provision that does not include the Uniform Act’s “reasonable steps” limitation.
3. Copyright Law and Online Harm
Some view copyright law as an increasingly relevant tool for addressing certain forms of online harm. For example, Denmark recently made headlines with a proposed law that would extend copyright protection to a person’s likeness, voice, and style, aiming to safeguard citizens against the creation and spread of deepfakes. Others have considered that copyright law may be used to protect survivors of online abuse when they are themselves the creators of content that is shared without permission.
However, copyright law is not a perfect answer to this problem. It has not yet been tested in this context, and, because copyright law in Canada is driven primarily by economic considerations, it does not translate well to online harms. In cases of online harm, economic interests often take a back seat to concerns such as privacy, equality, dignity, and personal safety.
4. Adopting a DSA-like Framework
Another option is adopting a framework modeled on the EU’s Digital Services Act (DSA). A DSA-like framework could offer procedural solutions, strategically focusing on Very Large Online Platforms (VLOPs).
This approach would emphasize transparency, regular reporting, and periodic risk assessments centred on threats like child safety, TFGBV, and hate content. A code of conduct, similar to the DSA’s, could introduce enhanced user empowerment tools, ban deceptive design patterns, and support research. Establishing a Platform Transparency and Integrity Office, alongside third-party audits, would help ensure compliance and continuous improvement. Importantly, the framework would avoid direct platform liability to remain consistent with Canada’s CUSMA obligations, instead encouraging proactive platform action through procedural requirements.
Online Is Real Life
Many problems prevent Canada’s legal system from effectively responding to online harms. If nothing changes with respect to CUSMA, survivors of online harm will be left with a legal framework that does not meaningfully support their efforts to correct, mitigate, or recover from that harm.
Neither tort law nor copyright law, as they currently exist, provides meaningful redress for survivors of online hate and TFGBV. Tort law remains underdeveloped with respect to privacy torts, which do not sufficiently reflect the role technology plays in online abuse. Copyright law, for its part, gives neither privacy nor dignity interests a prominent place, making it an unresponsive tool for addressing the personal and reputational harms that online abuse causes.
The federal government should play an important role in addressing these shortfalls. It should introduce a new version of Bill C-63 regardless of what happens in relation to the recently proposed Combatting Hate Act, to finally enact online harms legislation that would impose substantive accountability on the digital platforms that facilitate the spread of this harm. Without this legislation, the government misses a critical opportunity to protect its citizens—especially vulnerable communities—from the dangers posed by platforms that currently bear no liability for their role in enabling harm. That inaction is a decision in itself.
A DSA-like framework could potentially be used to address online hate and TFGBV. If that path were taken, it may also be worth exploring the development of more stringent hate laws, perhaps comparable to what Germany has implemented and expanded through its Network Enforcement Act (NetzDG). Such a shift would require substantial refinement, given the serious legal considerations related to Canada’s international obligations. It is not yet clear that the Combatting Hate Act will suffice, though it is a step in the right direction.
The pace of technological development is unlikely to slow, and the associated harms will not go away without intervention. Whatever happens next, it is important that the government moves quickly.