This year, India hosted the Global Artificial Intelligence (AI) Summit in a historically significant shift, becoming the first Global South nation to convene world leaders to debate the future of AI. The summit was the largest yet, drawing 250,000 participants including AI company executives, government leaders, and ministerial delegates. At the core of those debates were two ideas that governments are increasingly invoking but struggling to define: sovereign AI and democratic AI. 

Canada’s AI minister was not only in attendance but busy meeting with Indian officials to discuss cooperation and technology partnerships. This follows Canada and Germany’s joint declaration of intent and the launch of the Sovereign Technology Alliance, all of which are taking shape against the backdrop of an intensifying rivalry that has come to define the global AI landscape. 

U.S.–China relations are increasingly defined as a great power competition, with AI now at its centre. Leaders in both countries frame AI as the defining technology of the 21st century, one that will determine economic dominance, military superiority, and whose values get embedded in the global technological ecosystem. This zero-sum framing has pushed governance to the margins as both powers flex their geopolitical muscle to ensure rapid AI development continues unabated.

Canada is positioning itself as a potential anchor of a middle power alternative to this AI race. However, what it did not bring to the summit was a functioning AI law, meaningful worker protections, or a credible answer to how it plans to govern the technology it is so eager to develop. This is the central contradiction of the “middle power moment”: the countries best positioned to offer an alternative are the same ones still struggling to demonstrate that democratic AI governance is possible.

If Canada’s AI alliances are to matter, these countries must be prepared to set and defend global regulatory standards and rein in private ownership. Sovereignty is not proclaimed—it is demonstrated in the policy choices governments make when corporations or powerful states push back.

The cost of concentration

Genuine democratic AI governance starts with an honest account of the power it must confront. Meaningful regulation has to contend with a complex web of issues, including corporate concentration, financial speculation, dependency, and unchecked deployment of AI technologies.

A handful of companies have authority over the AI supply chain, and this concentration is a direct obstacle to middle power sovereignty. Google, Microsoft, and Amazon control the majority of cloud infrastructure: the compute resources on which all large-scale AI development depends. They also hold self-reinforcing advantages such as employee talent, access to data, and pathways to monetization. Model developers like OpenAI and Anthropic are bound to these cloud giants through licensing agreements, revenue-sharing arrangements, and intellectual property deals that ensure Big Tech captures value at every layer. Meta occupies a distinct position, simultaneously releasing open source models while monetizing AI through its advertising infrastructure and the behavioural data of billions of users. This concentration is precisely what makes unilateral national “sovereign AI” strategies insufficient. No middle power can meaningfully shift AI governance alone.

This imbalance underscores the widening global digital divide. In 2024 alone, China filed approximately 300,000 AI patent applications while the U.S. filed approximately 68,000, together accumulating the vast majority of the world’s AI knowledge base. Building that knowledge base requires resources most countries do not have. Countries without domestic cloud infrastructure, chip manufacturing capacity, access to data, or research talent become structurally dependent. They access AI through platforms owned elsewhere, governed by terms set elsewhere, and optimized for markets elsewhere. Research shows this dependency extends well beyond AI itself, encompassing hardware, platforms, and intellectual property across the entire digital market. Any serious sovereignty agenda must address this imbalance.

The scale of concentration is also reflected in investment flows. The U.S. dominates AI spending, accounting for roughly 62 per cent of global private AI investment and a cumulative $471 billion since 2013. China is the second-largest spender at $119 billion. Analysts predict AI spending could reach $2.5 trillion in 2026 alone, driven by the rapid expansion of data centres and computing capacity. Much of this investment is speculative: generative AI tools that produce text, images, and video have attracted capital far beyond demonstrated productivity gains, raising warnings of a financial bubble. While hype may fade, the investments in physical infrastructure, including data centres, energy systems, and cloud capacity, represent long-term capital commitments whose environmental, economic, and social impacts will outlast the current enthusiasm.

Once investors commit capital at this scale, rapid deployment becomes a financial imperative. Corporations integrate systems across institutions and markets before oversight mechanisms catch up. AI content increasingly saturates our information ecosystems, eroding public trust in our institutions as the line between human and machine-generated material blurs. At the same time, governments and employers are expanding surveillance systems and algorithmic management tools across policing, borders, and workplaces, often with little transparency and documented bias.

Public spaces are being fortified with AI technologies, and citizens are not just consumers of AI; they are increasingly its subjects. This makes regulation more urgent than ever. However, urgency alone is insufficient. For middle powers to forge a genuine alternative, they must also contend with an organized and well-resourced opposition.

The innovation-regulation dichotomy 

Opposition to meaningful regulation is not new, and middle powers are not immune to it. The alliance between technology companies, fossil fuel interests, and the military-industrial complex has spent decades perfecting the playbook for resisting accountability.

The scale of that effort is now visible in trade policy. A recent mapping of industry submissions to U.S. trade negotiators documents 260 complaints filed by 10 American tech lobby groups, including the Computer and Communications Industry Association and the Coalition of Service Industries, targeting democratic governance frameworks across more than 40 countries. Cross-border data flow restrictions account for the largest share of complaints, followed by digital services taxes and cloud service regulations. Canada is among the most targeted, alongside the EU and Brazil. The intention is to frame every regulatory standard as a trade barrier and use bilateral pressure to dismantle it country by country.

We saw this in Canada with Bill C-27, the proposed federal privacy and AI legislation, where an industry-dominated consultation process that excluded civil society and workers contributed to years of controversy before the bill eventually died. 

The pattern persists even where regulation is most advanced. The EU’s Digital Omnibus, proposed in November 2025, is framed as reducing the administrative burden on businesses, but operates in practice as a deregulatory intervention. The Digital Omnibus weakens the definition of personal data, broadens exemptions for AI training on sensitive information, and expands the use of automated decision-making in employment contexts. While the amendments do not repeal protections outright, they narrow enforceability and redistribute responsibility for compliance away from developers. Faced with threats of capital flight or lost investment, governments too often weaken regulatory standards in the name of competitiveness.

Corporate lobbyists consistently frame AI governance as a trade-off between innovation and regulation, but the argument that guardrails on privacy, transparency, and environmental impact will undermine competitiveness does not hold up. Clear regulatory frameworks have historically provided the stability that innovation requires. Safety standards in pharmaceuticals, aviation, and finance did not eliminate innovation; they shaped it toward public trust and long-term viability. 

Protecting people from deepfakes, combatting disinformation, preventing discrimination, and preserving privacy and security are not compromises. They are the floor. Canadians are watching these gaps in protection go unaddressed in real time, and they are looking for government action. According to a recent poll, 85 per cent of Canadians want AI to be regulated. The same poll found that 83 per cent are concerned about privacy and worried that society is becoming too dependent on AI.

The current ‘elbows up’ political moment is fertile ground. As the U.S. uses economic coercion to pressure trading partners into alignment, middle powers are discovering that sovereignty is fragile. That political energy, mounting as many nations grapple with fundamental questions about who their economies and policies actually serve, is something to build on.

Using the tools we already have

Middle powers already have more leverage than they are using. The tools to regulate AI exist, and they are also precisely what is being targeted. What is missing is the political will to apply them and the coordination to make that will count. 

Human rights law could be applied to automated decision-making just as it applies to human decisions. International obligations reinforce those standards. Labour law could govern unsafe working conditions created by algorithmic management, automation, and exploitative data labelling practices. Consumer protection law could address deceptive AI design and manipulative chatbot interfaces. Competition law could challenge the extreme concentration of cloud infrastructure and foundation models. Environmental and energy laws could require impact assessments, emissions reporting, and water-use transparency for data centre expansion. Public procurement rules could demand auditability and human oversight in government AI contracts. Governments do not always need new laws. Existing frameworks may need targeted updates to address the specificities of AI, but that is not the same as starting from scratch. 

The stakes of getting this wrong are high. The past decade of social media offers a cautionary tale. Platforms were allowed to scale globally before any oversight was in place. The result has been deepened polarization, heightened vulnerability among youth, information chaos, and reactive regulation that is still struggling to catch up.

Coordination gives middle powers leverage they lack individually, and it is the most effective counterweight to corporate lobbying that exploits regulatory fragmentation. Critically, that effort will only be meaningful if middle powers resist deregulation pressure at home. Aligning internationally while weakening domestic safeguards would hollow out the promise of democratic, sovereign AI.

Whether these tools are used depends on how governments understand sovereignty itself. If sovereignty is reduced to technological self-sufficiency or geopolitical advantage, enforcement will always give way to competition. However, if sovereignty is understood as the democratic authority to govern AI in the public interest, then collective enforcement among middle powers becomes a substantive strategy. That reframing determines whether middle powers use the tools they have or continue to negotiate them away. 

Sovereignty through democratic processes

“Sovereignty” and “democracy” are the new buzzwords in AI governance, and they desperately need grounding in material reality.

The nationalistic framing of sovereignty reflects a Cold War logic applied to contemporary techno-politics: AI as the new nuclear arsenal, data centres as strategic assets, innovation as a tool to outpace adversaries. Prioritizing advantage or dominance fundamentally constrains our imagination for what this technology could actually do for the collective. If AI is synonymous with geopolitical competition, citizens in all nations lose.

Nevertheless, this version of sovereignty is driving policy approaches in both the U.S. and China, neither of which offers democratic AI. China has built out a network of nearly 700 million surveillance cameras, using AI facial recognition to govern its population while imposing strict censorship rules across digital platforms. Meanwhile, the U.S. is embedding AI in Immigration and Customs Enforcement (ICE) through partnerships with major tech firms to locate and target migrants, while using video analytics and licence plate scanners to identify and catalogue activists. The White House has moved to block state AI regulations while banning government use of what it considers “woke AI”, or models that address equity considerations, effectively mandating that existing patterns of racial and gender bias in algorithms be treated as the neutral baseline. In both China and the U.S., civil society has no material influence over how AI systems are developed or deployed, nor does it capture the benefits.

Democratic AI requires actual democratic processes, alongside genuine respect for the dignity, creativity, and intelligence of the public. This is not resolved merely by making today’s models more transparent or lowering their costs, nor by policy tweaks or technological fixes alone.

It requires asking a more fundamental question: who owns the digital infrastructure, and who controls the data? A publicly owned AI entity, developed and governed on behalf of citizens rather than shareholders, would reflect a categorically different set of priorities. So would public data trusts, where people collectively control how their information is used rather than surrendering it to platforms. Consider how a publicly governed large language model could work differently: shared queries build a commons; prior outputs are stored and reused, reducing the energy and compute cost of repeated generation. The architecture itself reflects democratic values, with knowledge held by the public, governed collectively, and accountable to users. Unlike current models built on the uncredited work of writers, artists, and creators, a public system could embed fair attribution and compensation into its design from the start. These institutions would give middle powers something concrete to defend internationally.
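The shared-commons mechanism described here can be sketched in a few lines of code. This is an illustrative toy under stated assumptions, not a real system: the names PublicQueryCommons and fake_model are hypothetical, and an actual public model would need privacy controls, governance rules, and real attribution records on top of the basic idea shown, which is that repeated queries are served from a shared store instead of being regenerated.

```python
from dataclasses import dataclass


@dataclass
class CachedAnswer:
    text: str
    sources: list   # attributed works whose creators could be compensated
    reuse_count: int = 0


class PublicQueryCommons:
    """Toy sketch of a query commons: prior outputs are stored and
    reused, so the expensive model call runs only once per question."""

    def __init__(self, generate_fn):
        self._generate = generate_fn  # stand-in for the costly model call
        self._cache = {}
        self.generations = 0          # how many times the model actually ran

    def ask(self, query: str) -> CachedAnswer:
        key = " ".join(query.lower().split())  # normalize spacing and case
        if key not in self._cache:
            text, sources = self._generate(query)
            self.generations += 1
            self._cache[key] = CachedAnswer(text, sources)
        else:
            self._cache[key].reuse_count += 1
        return self._cache[key]


# Hypothetical stand-in for a model: returns an answer plus attributed sources.
def fake_model(query):
    return f"answer to: {query}", ["source-A", "source-B"]


commons = PublicQueryCommons(fake_model)
commons.ask("What is a data trust?")
commons.ask("what is a data  trust?")  # normalizes to the same key
print(commons.generations)  # the model ran only once
```

The design choice the sketch highlights is the one the paragraph argues for: because the cache is shared and carries attribution metadata, reuse both cuts compute and preserves a record of whose work each answer drew on.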

Canada’s provincial hydro utilities, public broadcasters, and federally funded research networks like CANARIE, which brought digital infrastructure across Canada in the early days of the internet, demonstrate that states can govern complex technology in the public interest when they choose to. The way forward is through better politics, not better technology. The task for middle powers is to make that choice collectively. This means doing two things at once: establishing public alternatives where the market has failed, and enforcing meaningful accountability across the private AI ecosystem that already exists. These are not competing agendas. They are two sides of the same democratic project.

From declarations to enforcement

Middle powers like Canada, Germany, Brazil, South Africa, and the Netherlands are well positioned to challenge the current AI governance landscape, but this requires moving beyond declarations toward accountability that is binding.

Accountability for AI harms needs to be built into the entire ecosystem from the foundation up to the point of deployment, rather than treated as an afterthought. Governments must hold technology companies responsible for foreseeable harms embedded in their design choices, training data, and system architecture. Deployers must be accountable for how AI systems are implemented in specific contexts, including whether meaningful human oversight exists and what social implications emerge from how those platforms are used. Liability cannot be shifted onto users or abstracted onto the systems themselves. Responsibility rests with the companies and state actors who build and deploy them. When AI causes harm, there must be rigorous investigation and real pathways for reform.

This also requires addressing the limitations of how consultation is currently practiced. Mandates for auditing AI systems have too often treated impacted communities as token voices whose input provides a veneer of public legitimacy. Researchers call this ‘participation washing’: consultation that is merely performative, with no plan for long-term partnerships and no genuine transfer of power to those most affected. This allows industry to continue business as usual, doing nothing to transform the structural conditions or fix the broken incentives driving the push to deploy AI because it is available, not because it is needed.

For middle power cooperation to have material weight beyond these domestic frameworks, it must extend into trade. Conditioning trade agreements and AI partnerships on the protection of workers’ rights, data sovereignty, and environmental standards would transform international solidarity from rhetoric into regulatory leverage. Resource sharing, market access, and research collaboration should come with binding commitments, closing the gap between the values middle powers claim to share and the rules they are willing to enforce together. This is what distinguishes a real middle power alternative from a rebranded version of the same concentrated corporate model with different flags on it.

Seizing the moment

Competing for AI development on the U.S. and China’s terms is a race that middle powers will lose. The task is getting off that track entirely. Winning the AI race is not the goal. Demonstrating that a different model is viable and worth protecting is. 

The governance architecture we build now will determine what AI makes possible for generations to come, and middle powers have a genuine opening to shape it, but only if they are willing to be honest about what has failed so far. Sovereign AI that means nothing more than nationalist competition is not sovereignty. Democratic AI that relies on the exclusion of civil society instead of genuine power-sharing is not democracy.

The India summit was significant, but significance is not the same as rupture. Middle powers gathered to discuss the future of a technology whose infrastructure, investment flows, and governing assumptions remain controlled by the two powers they are purportedly seeking an alternative to. Whether these countries seize the moment will not be visible in the declarations they sign. It will be visible in the regulatory choices they make when corporations push back, the public institutions they are willing to build, and whose interests those institutions are designed to serve.