Canadians are increasingly aware of the corrosive effects of disinformation on democratic institutions.
The spread of disinformation has catalyzed support for fringe movements and undermined trust in the democratic process. The Canadian government has recognized the detrimental effects of disinformation on Canadian politics and democracy.
Other countries have taken decisive steps to tackle disinformation. The European Union has passed binding regulations; Australia mandated revenue-sharing between platforms and legacy media; and India banned TikTok outright, citing threats to national security and sovereignty.
Meanwhile, Canada’s policy response remains limited and piecemeal—it’s time for Canada to take this issue seriously.
Social media platforms as disinformation drivers
The line between domestic disinformation and foreign interference is blurring, and Canada has yet to respond sufficiently to the disinformation proliferating on American platforms like X and Meta. When disinformation promotes annexation or incites distrust in democratic systems, its effect is institutional erosion. The Freedom Convoy protests, the rise of Wexit separatism, and declining trust in elections have all been fueled by online disinformation.
X (formerly Twitter) has become a hub for hate speech. After Elon Musk’s takeover of X, hundreds of white nationalist accounts were unbanned. Musk himself is a perpetrator of disinformation—spreading conspiracy theories, calling former Prime Minister Trudeau ‘governor,’ and re-sharing disinformation from Rebel News.
Meta has removed Canadian news from its platforms in response to federal legislation requiring the company to compensate Canadian legacy media. Meta has also ended its fact-checking program, which existed specifically to target disinformation on its platforms.
These compromised platforms offer insufficient, or non-existent, fact-checking tools, and Canada has been hesitant to regulate them directly.
Disinformation as foreign interference
Canada has a foreign interference problem, and it’s coming through our phones. The Communications Security Establishment (CSE) and Canadian Security Intelligence Service (CSIS) have issued repeated warnings about hostile state actors weaponizing online spaces to weaken democratic cohesion. Their strategy is familiar: polarize, destabilize, exploit.
Canadians have also become more distrustful of social media as a news source, yet 85 per cent of young people aged 15 to 34 report getting news or information from social media or the internet. Amid the decline of legacy media and Canadians' growing inability to discern fact from disinformation, the Canadian government must reinvest in and support traditional journalism to combat disinformation.
If Canada returns to the fundamentals of how CSIS defines foreign interference, Meta's rollback of fact-checking and its blocking of Canadian news in response to the Online News Act could both be construed as detrimental to the interests of Canada.
Platforms are not neutral. Their algorithms and recommendation systems optimize for engagement, not truth. The most emotionally charged content rises to the top—often conspiratorial, inflammatory, or outright false.
What can we use from Canada’s toolbox?
The spread of disinformation continues to destabilize Canadian democracy, fuel ideological extremism, and erode trust in public institutions. While Canada has been slow to act at the platform level, it already possesses legal and institutional tools that, if sharpened or reinterpreted, could offer partial solutions.
- Canada Elections Act, s. 91 (Bill C-76): Prohibits publishing false information about candidates or election officials to influence results. However, the removal of the “knowingly” standard makes it both broader and more legally vulnerable. The Act does not clearly apply to foreign actors who target Canadian audiences, leaving a gap in enforcement when it comes to disinformation originating outside the country. Clarifying or amending this section could help address the role of foreign political figures or influencers in Canadian elections.
- Criminal Code s. 319: Covers hate propaganda, including speech that promotes genocide or incites hatred, so it could apply to disinformation campaigns that promote hate against marginalized groups. However, prosecutions under this section are rare, and it does not address how such content spreads algorithmically on major platforms.
- Public safety briefings: CSIS and CSE regularly warn of disinformation’s role in ideologically motivated extremism; this opens the door for invoking emergency tools if foreign interference can be proven.
- Support legacy media: In the absence of trustworthy news outlets, Canadians are left vulnerable to the spread of disinformation online. Reinvesting in legacy media is essential to ensure citizens have access to accurate, reliable information that upholds democratic discourse.
Each of these existing tools offers a narrow entry point to address disinformation, but none are currently being used in a way that reflects the scale or structure of the problem. Expanding these tools could allow for a more targeted response: adapting election laws to cover foreign interference, enforcing hate propaganda laws against coordinated disinformation campaigns, and recognizing foreign disinformation as a standalone national security threat.
So, if we have these tools, what are we missing?
The biggest shortcoming is a lack of coordinated strategy. Canada has no hard laws regulating algorithmic manipulation or holding platforms accountable for spreading disinformation or hate speech. Although CSIS and CSE have recognized the role of disinformation, foreign disinformation is not formally classified as a national security offence unless it is tied to espionage, terrorism, or sabotage.
If Canada is to take disinformation more seriously, it will need new and improved public policy tools that respect constitutional rights such as freedom of expression. To better understand how these platforms could be held accountable to our hate-speech laws, we can look to examples from abroad.
What can Canada learn from the EU?
The European Union’s regulatory frameworks demonstrate that serious regulation of disinformation is both feasible and enforceable. The Digital Services Act (DSA) and the Digital Markets Act shift responsibility onto platforms, requiring algorithmic transparency, systemic risk assessments, and clear moderation procedures. These aren’t vague guidelines—they come with teeth. Failure to comply can result in fines of up to six per cent of the company’s global revenue.
Canada can follow suit by shifting away from voluntary moderation and moving toward structural reform: mandating transparency about how platforms curate content, requiring that users be allowed to opt out of algorithmic recommendation systems, and ensuring that independent researchers have access to platform data.
The EU’s Code of Practice on Disinformation also shows that voluntary commitments only work when paired with regulatory pressure. U.S. big tech companies like Meta and X have resisted transparency even under EU law, proving that self-regulation fails without legal consequences.
Canada has some legal tools, but without holding platforms accountable for how they function—not just what they host—these tools fall short. The EU’s approach offers a model: treat disinformation as a systemic risk and regulate the infrastructure that enables it.
Canada’s next steps?
If platforms continue to amplify disinformation, refuse transparency, and undermine Canadian democratic institutions, regulation alone may not be enough. In extreme cases, countries have turned to stronger measures to protect their information ecosystems.
Platform bans aren’t blunt—they’re strategic. Countries like Australia, India, and Pakistan have already taken bold steps like forcing platform compliance or imposing partial shutdowns when national interests were at stake. Australia even pushed Facebook into a revenue-sharing agreement with local media. These aren’t knee-jerk bans—they are calculated policy decisions in response to repeat offenders.
Canada has already banned TikTok from federal devices and ordered the company to wind down its Canadian operations. Those precedents matter. If platforms are hosting annexationist content, promoting foreign state propaganda, and dismantling trust in Canadian institutions, why should they maintain unrestricted access to Canadian users?
We have the tools, the warnings, and the global examples. What’s missing is the political will to act.