We are living through an “AI moment”—a period of dizzying change in the field of artificial intelligence that is sowing fear, hope and confusion in equal measure.
Will the latest wave of AI technologies transform the world for the better, ushering in a new era of robot-assisted leisure? Are we on track for a dystopian future where all our jobs are replaced and any benefits accrue to an increasingly small number of global corporate overlords? Or will reality fall somewhere in the middle, with a great technological upheaval creating benefits for some and destitution for others—a more familiar historical pattern?
The truth at this stage—equally comforting and disquieting—is that no one truly knows where we are headed.
Not the AI scientists scrambling to build and release new AI models without first understanding their implications. Not the investors throwing billions at the next hype bubble. Not the governments simultaneously pushing AI adoption while debating the merits of regulation. And not the end users—workers, students, citizens—rejecting or embracing AI tools.
Underpinning all this uncertainty are the many contradictions of the AI moment itself. As we work toward a collective understanding of what artificial intelligence means—a necessary step on the path to articulating a clear collective vision for AI in our society—here are some of the most fundamental issues we must grapple with.
AI is advancing exponentially—and progress is slowing down
The term “artificial intelligence” is an umbrella for a variety of related technologies. The particular AI systems taking the world by storm right now, popularized by ChatGPT, are called large language models (LLMs). These systems are trained on massive bodies of text—essentially everything on the internet—through which they learn to predict which word is most likely to come next, given the words that came before and the user’s prompt.
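For readers curious what “predicting the next word” actually looks like, here is a deliberately simplistic sketch in Python. It builds a bigram model, the crudest possible next-word predictor, from a made-up twelve-word corpus standing in for the internet; real LLMs use neural networks with billions of parameters, but the core task is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "essentially everything on the internet".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word: a bigram model, the simplest
# possible "predict the next word" system.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the word that most often followed `word` in the training text.
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # prints "on"
```

The model has no idea what a cat or a mat is; it has only seen which words tend to follow which. Scale that pattern-matching up by many orders of magnitude and you get the fluent, seemingly intelligent output of an LLM.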
LLMs are inherently unintelligent. They have no internal “world model,” which means they do not understand what words mean and they have no logical sense of how those words fit together. But by stringing together words in a sensible way based on complex pattern recognition, they create the illusion of intelligence. And that illusion is getting better and better.
Indeed, LLMs can now perform many creative and problem-solving tasks that were previously thought to be impossible for machines. New generations of models are released every few months, each exceeding the capabilities of the last, driven by exponential increases in the computing power used to train them.
But there are indications that these models are not improving as quickly as they once did. If an early LLM was 70 per cent as capable as a human at a given task, the next generation might reach only 85 per cent, a shrinking gain for each leap in scale. It requires more and more energy and money to train models that are only incrementally better than the ones that came before.
Whether LLMs will continue on a path toward artificial general intelligence (AGI)—essentially human-level intelligence across all cognitive tasks—or whether they are hitting a technological wall is fiercely debated, and the answer has profound consequences for the potential impacts of AI in the near future. We need to be just as prepared for AI to continue getting better, whether through LLMs or other AI technologies, as we are for AI development to slow down.
AI is not good enough to do most jobs—but will replace many workers anyway
AI in its various guises has been automating labour for a long time. However, it is the threat that LLMs pose to large swaths of the knowledge economy that is keeping people up at night.
Given their fundamental lack of logic and inherent unreliability, it is tempting to assume that the latest, LLM-driven wave of AI systems cannot replace actual workers. Indeed, even the most advanced LLMs struggle with hallucinations, which means they invent fictional details or make basic factual errors that a human never would. Most jobs also require a degree of relationship building and human interaction that AI systems alone are incapable of replicating. Ultimately, there are very few jobs today that can be replaced, one-for-one, by an AI system.
Yet AI is already displacing and will continue to displace workers in many knowledge and service-based fields. How can that be?
First, many jobs do not require a high level of expertise or precision. An AI system—even one that makes occasional mistakes—need only be as competent as an average worker to be a viable replacement. And even if an AI system is not as competent as the worker it is displacing, operating most AI systems is much cheaper than paying a salary, which may be a worthwhile trade-off for profit-minded employers.
Second, even where a human worker plays an irreplaceable role, fewer of those workers are necessary when AI systems can take on an increasing share of specific tasks. Rather than a team of five researchers, for example, you may now only need one researcher to prompt, vet and compile the outputs of various AI tools and then present the results to other humans.
A “good enough” AI is a big threat to large sections of the workforce—especially younger, less-experienced workers—whether or not the underlying technology continues to improve.
AI is making us more productive—and making us dumber
AI systems, especially LLMs, are often compared to calculators. We may have lost the ability to perform long division, but computer-assisted computation allows us to work faster and take on more complex challenges than before. The latest wave of AI tools makes a similar promise—by handing off routine cognitive tasks to AI, such as summarizing notes or writing emails, we can focus on more interesting and productive work.
It is too early in the modern AI era for definitive studies, but there is some early evidence to support that idea. Workers who lean on LLM assistance save time on simpler tasks, leading to increases in productivity relative to their peers.
However, as with calculators and long division, this process of “cognitive offloading” comes with costs. A growing body of literature is finding that the more people depend on AI tools, the weaker their cognitive functioning and critical thinking skills become.
It’s a catch-22—the more we rely on AI tools, the more we need to be able to critically assess their outputs. Yet the more we use them, the worse we get at doing so. This is an especially large problem for the education system. Students who use AI tools to bypass actual learning are the least equipped to responsibly use those tools.
The purported productivity and well-being benefits of AI may be short-lived if they come at the cost of long-term social and cognitive decline—a tension that governments and businesses have been reluctant to consider.
AI is a boon to democracy—and a tool of oppression
Like the early days of the internet, the nascent AI revolution has an air of democracy about it. Some of the most advanced AI tools ever developed are freely available, and billions of people worldwide are using them. Many who could not otherwise afford it now have access to a personalized tutor, business partner or therapist—or at least a simulacrum of one.
Those are real benefits, but they obscure systemic injustices in the development and deployment of AI systems. Among other issues, these systems are often trained on copyrighted or private works without the creators’ consent. They reproduce biases in their training data, including racial and gender discrimination. They consume vast amounts of electricity and water. And they perpetuate an economic model that sees large tech companies, mainly in the U.S., extract and exploit the data of people around the world for their own private gain. Many AI tools have been deliberately shaped to advance the particular political or ideological views of their creators without those biases being disclosed to users.
And that’s just how the models are built. The way AI tools are being used, in practice, introduces countless other concerns, such as AI-powered disinformation campaigns designed to shape public discourse and undermine the democratic process.
It is easy to view AI as an ephemeral, abstract phenomenon that exists only on our devices. But we cannot lose sight of the messy history and contested political economy of AI systems in the real world.
AI adoption is overhyped—and underappreciated
As a consequence of the preceding contradictions, artificial intelligence is both more and less consequential than many hope and fear. In 1978, the futurist Roy Amara famously observed that we tend to overestimate the effect of a technology in the short run and to underestimate the effect in the long run. Many transformational technologies have since followed that path, including smartphones and social media, both of which started out as fringe curiosities before rapidly consuming our information environments.
AI appears to be on the same path. The number of tasks that AI can competently perform today does not match the breathless claims of AI boosters. The actual economic impacts of AI today pale in comparison to the billions of dollars flowing into AI companies. And AI-fueled disinformation on social media is still largely identifiable.
Yet dismissing AI out of hand—assuming it will never deliver on its disruptive potential—is equally naive. Even if the underlying technologies stopped improving, the integration of artificial intelligence into every aspect of our lives would still have profound consequences in the coming decades. By the time those effects are truly felt, it may be too late to do anything about them.
AI cannot be controlled—and must be regulated
The through line for these contradictions is a tension between the almost mythical inevitability of technological progress and the inherent uncertainty of technological adoption.
On the one hand, the cat is now out of the bag. There is no future without large language models and other AI systems playing a prominent role. But how AI comes to shape our economies, cultures, relationships and very minds depends on the collective choices we make in the formative months and years to come.
Government regulation is not the whole solution, but it has a large role to play. The tech sector will not take responsibility for protecting the public good in the face of such profound disruption.
It’s an area Canada has struggled with historically. The 2017 Pan-Canadian Artificial Intelligence Strategy was criticized for privileging the voices of industry while excluding workers and users. The 2022 Artificial Intelligence and Data Act collapsed under similar criticisms. And the latest AI Strategy Task Force, which included no representation from the labour or social justice movements, seems poised to repeat those mistakes.
Yet past failures cannot stop us from trying again. We are at a crucial societal juncture for artificial intelligence. And to arrive at an AI-saturated world that actually serves the public interest, we need a clear-eyed view of what artificial intelligence is and is not—messy contradictions and all—and a collective vision for the technological society we all want to live in.


