THE ECONOMIST: AI won’t bring on an apocalypse, just the end of the world economy as we know it

If Silicon Valley’s predictions are even close to being accurate, expect unprecedented upheaval
For most of history, the safest prediction has been that things will continue much as they are. But sometimes the future is unrecognisable.
The tech bosses of Silicon Valley say humanity is approaching such a moment, because in just a few years, artificial intelligence (AI) will be better than the average human being at all cognitive tasks.
You do not need to put high odds on them being right to see that their claim needs thinking through. Were it to come true, the consequences would be as great as anything in the history of the world economy.
Since the breakthroughs of almost a decade ago, AI’s powers have repeatedly and spectacularly outrun predictions. This year, large language models from OpenAI and Google DeepMind achieved gold-medal performance in the International Mathematical Olympiad, 18 years sooner than experts surveyed in 2021 had expected.
The models grow ever larger, propelled by an arms race between tech firms, which expect the winner to take everything; and between China and America, which fear systemic defeat if they come second.
By 2027, it should be possible to train a model using 1,000 times the computing resources that built GPT-4, which lies behind today’s most popular chatbot.
What does that say about AI’s powers in 2030 or 2032? As we describe in one of two briefings this week, many fear a hellscape, in which AI-enabled terrorists build bio-weapons that kill billions, or a “misaligned” AI slips its leash and outwits humanity.
It is easy to see why these tail risks command so much attention.
Yet, as our second briefing explains, they have crowded out thinking about the immediate, probable, predictable — and equally astonishing — effects of a non-apocalyptic AI.
Before 1700, the world economy grew, on average, by 8 per cent a century. Anyone who forecast what happened next would have seemed deranged. Over the following 300 years, as the Industrial Revolution took hold, growth averaged 350 per cent a century. That brought lower mortality and higher fertility.
Bigger populations produced more ideas, leading to even faster expansion. Because of the need to add human talent, the loop was slow. Eventually, greater riches led people to have fewer children. That boosted living standards, which grew at a steady pace of about 2 per cent a year.
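A back-of-the-envelope calculation makes the contrast concrete. In the sketch below, only the growth rates come from the figures above; the doubling-time arithmetic is ordinary compound interest.

```python
# Doubling times implied by the growth rates quoted above.
# Only the rates themselves come from the text; the arithmetic is standard.
import math

def doubling_time_years(rate_per_period, years_per_period):
    """Years for output to double at a given compound growth rate."""
    return math.log(2) / math.log(1 + rate_per_period) * years_per_period

print(f"8% a century (pre-1700):    {doubling_time_years(0.08, 100):.0f} years to double")
print(f"350% a century (post-1700): {doubling_time_years(3.50, 100):.0f} years to double")
print(f"2% a year (modern era):     {doubling_time_years(0.02, 1):.0f} years to double")
# Roughly 900, 46 and 35 years respectively.
```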
Subsistence to silicon
AI faces no such demographic constraint. Technologists promise that it will rapidly hasten the pace at which discoveries are made. Sam Altman, OpenAI’s chief executive, expects AI to be capable of generating “novel insights” next year. AIs already help program better AI models. By 2028, some say, they will be overseeing their own improvement.
Hence the possibility of a second explosion of economic growth. If computing power brings about technological advances without human input, and enough of the pay-off is reinvested in building still more powerful machines, wealth could accumulate at unprecedented speed.

Economists have long been alive to the relentless mathematical logic of automating the discovery of ideas. According to a recent projection by Epoch AI, a bullish think-tank, once AI can carry out 30 per cent of tasks, annual growth will exceed 20 per cent.
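A toy version of that feedback loop shows why the numbers can run away. The simulation below is purely illustrative: the savings rate and the strength of automated research are invented parameters, not Epoch AI’s model or anyone’s forecast.

```python
# Crude toy loop: output = technology x compute; part of output is reinvested in
# compute, and more compute speeds up discovery. Parameter values are invented
# for illustration only.
compute = 1.0          # stock of AI "machines" (arbitrary units)
technology = 1.0       # level of technology / accumulated ideas
savings_rate = 0.3     # share of output ploughed back into compute (assumed)
research_gain = 0.01   # how strongly a unit of compute lifts technology each year (assumed)

output_prev = technology * compute
for year in range(1, 31):
    compute += savings_rate * output_prev            # reinvest part of the pay-off
    technology *= 1 + research_gain * compute        # automated discovery: ideas scale with compute
    output = technology * compute
    if year % 5 == 0 or output > 1e12:
        print(f"year {year:2d}: output x{output:,.0f}, annual growth {output / output_prev - 1:.0%}")
    if output > 1e12:
        break                                         # growth has 'exploded'; stop the toy here
    output_prev = output
```

In this toy, annual growth climbs from about a third to effectively unbounded within two decades; the point is the shape of the curve, not the particular numbers.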
True believers, including Elon Musk, conclude that self-improving AI will create a superintelligence. Humanity would gain access to every idea to be had — including for building the best robots, rockets and reactors. Access to energy and human lifespans would no longer impose limits. The only constraint on the economy would be the laws of physics.
You don’t need to go to that extreme to conjure up AI’s mind-boggling effects. Consider, as a thought experiment, just the incremental step to human-level intelligence.
In labour markets, the cost of using computing power for a task would limit the wages for carrying it out: why pay a worker more than the digital competition?
Yet the shrinking number of superstars whose skills were not automatable and could directly complement AI would enjoy enormous returns. The only people doing better than them, in all likelihood, would be the owners of AI-relevant capital, which would be gobbling up a rising share of economic output.
Everyone else would have to adapt to gaps in AI’s abilities and to the spending of the new rich. Wherever there was a bottleneck in automation and labour supply, wages could rise rapidly. Such effects, known as “cost disease”, could be so strong as to limit the explosion of measured GDP, even as the economy changed utterly.
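A stylised two-sector sketch, with invented numbers, shows how that drag works. Assume people want automated goods and human services in roughly fixed proportions; the bundle they can actually consume then grows no faster than the human bottleneck allows.

```python
# Toy 'cost disease' illustration with invented numbers: one sector is automated
# and its productivity soars; the other still needs people and barely improves.
automated = 1.0   # output of goods AI can make, assumed to grow 50% a year
human = 1.0       # output of tasks still needing people, assumed to grow 1% a year

for year in range(10):
    automated *= 1.50
    human *= 1.01

# If the two are wanted in fixed proportions, the consumable bundle is capped
# by the scarcer, human-bottleneck sector.
bundle = min(automated, human)
print(f"Automated output after 10 years: x{automated:.0f}")   # ~x58
print(f"Human-bottleneck output:         x{human:.2f}")       # ~x1.10
print(f"Consumable bundle:               x{bundle:.2f}")      # also ~x1.10
# Measured growth is dragged towards the slow sector even as part of the
# economy explodes, and the relative price of human services soars.
```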
The new patterns of abundance and shortage would be reflected in prices. Anything AI could help produce — goods from fully automated factories, say, or digital entertainment — would see its value collapse.
If you fear losing your job to AI, you can at least look forward to lots of such things. Wherever humans were still needed, cost disease might bite. Knowledge workers who switched to manual work might find they could afford less childcare or fewer restaurant meals than today. And humans might end up competing with AIs for land and energy.
This economic disruption would be reflected in financial markets. There could be wild swings between stocks as it became clear which companies were winning and which were losing winner-takes-all contests. There would be a rapacious appetite to invest, both to generate more AI power and to keep the stock of infrastructure and factories growing in step with the economy. At the same time, the desire to save for the future could collapse, as people — and especially the rich, who do the most saving — anticipated vastly higher incomes.
Persuading people to give up capital for investment would therefore require much higher interest rates — high enough, perhaps, to make long-duration asset prices fall, despite explosive growth.
Scholars disagree, but in some models, interest rates rise one-for-one or more with growth. In an explosive scenario, that would mean having to refinance debts at 20-30 per cent. Even debtors whose incomes were rising fast could suffer; those whose incomes were not hitched to runaway growth would be pummelled. Countries that were unable or unwilling to exploit the AI boom could face capital flight.
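One standard benchmark behind that arithmetic is the textbook Ramsey rule, which links the real interest rate to growth. The parameter values below are conventional illustrative choices, not estimates.

```python
# Ramsey rule: r = rho + sigma * g, where rho is pure impatience and sigma
# measures how strongly people prefer smooth consumption over time.
# Values are illustrative textbook choices.
rho = 0.01                       # time preference: 1% a year
for sigma in (1.0, 1.5):         # sigma = 1 means rates rise one-for-one with growth
    for g in (0.02, 0.20):       # today's ~2% growth versus an 'explosive' 20%
        r = rho + sigma * g
        print(f"sigma={sigma:.1f}, growth={g:.0%} -> real interest rate ~{r:.0%}")
# At 20% growth the rule lands in the 20-30% range quoted above.
```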
Macroeconomic instability could also strike anywhere: inflation could take off as people binged on their anticipated fortunes and central banks failed to raise rates fast enough.

It is a dizzying thought experiment. Could humanity cope? Growth has accelerated before, but there was no mass democracy during the Industrial Revolution; the Luddites, history’s most famous machine-haters, did not have the vote. Even if average wages surged, higher inequality could lead to demands for redistribution. The state would also have more powerful tools to monitor and manipulate the population. Politics would therefore be volatile. Governments would have to rethink everything from the tax base to education to the protection of civil rights.
Despite that, the rise of superintelligence should provoke wonder. Dario Amodei, boss of Anthropic, told The Economist this week that he believes AI will help treat once-incurable diseases. The way to look at another acceleration, if it comes, is as the continuation of a long miracle, made possible only because people embraced disruption. Humanity may find its intelligence surpassed. It will still need wisdom.
Originally published as The economics of superintelligence