THE NEW YORK TIMES: The one danger that should unite the US and China

Thomas L. Friedman
The New York Times
Credit: Saratta Chuengsatiansup/NYT

China and America don’t know it yet, but the artificial intelligence revolution is going to drive them closer together, not farther apart. The rise of AI will force them to fiercely compete for dominance and — at the same time and with equal energy — cooperate at a depth our two countries have never attempted before. They will have no choice.

Why am I so confident about that? Because AI has certain unique attributes and poses certain challenges that are different from those presented by any previous technology.

This column will discuss them in detail, but here are a couple to chew on for starters: AI will spread like a steam vapor and seep into everything. It will be in your watch, your toaster, your car, your computer, your glasses and your pacemaker — always connected, always communicating, always collecting data to improve performance.


As it does, it will change everything about everything — including geopolitics and trade between the world’s two AI superpowers, and the need for cooperation will become ever more apparent each month.

For instance, say you break your hip, and your orthopedist tells you the world’s most highly rated hip replacement is a Chinese-made prosthetic that is infused with Chinese-designed AI. It is constantly learning about your body and, with its proprietary algorithm, using that data to optimize your movements in real time. It’s the best!

Would you let that “smart hip” be sewn into you? I wouldn’t — not unless I knew that China and America had agreed to embed a common ethical architecture into every AI-enabled device that either nation builds. Viewed on a much larger, global scale, this could ensure that AI is used only for the benefit of humanity, whether it is employed by humans or operates on its own initiative.

At the same time, Washington and Beijing will soon discover that putting AI in the hands of every person and robot on the planet will super-empower bad people to levels no law enforcement agency has ever faced.

Remember: Bad guys are always early adopters! And without the United States and China agreeing on a trust architecture to ensure that every AI device can be used only for humans’ well-being, the artificial intelligence revolution is certain to produce super-empowered thieves, scam artists, hackers, drug dealers, terrorists and misinformation warriors.

They will destabilize America and China long before these two superpower nations get around to fighting a war with each other.

In short, as I will argue, if we cannot trust AI-infused products from China and China cannot trust ours, very soon the only item China will dare buy from America will be soybeans, and the only thing we will dare buy from China will be soy sauce, which will surely sap global growth.

“Friedman, are you crazy? The U.S. and China collaborating on AI regulation? Democrats and Republicans are in a contest today to see who can denounce Beijing the loudest and decouple the fastest. And China’s leadership has openly committed to dominating every advanced manufacturing sector. We need to beat China to artificial superintelligence — not slow down to write rules with them. Don’t you read the papers?”

Yes, I read the newspapers — especially the science section. And I’ve also been discussing this issue for the past year with my friend and AI adviser Craig Mundie, the former head of research and strategy for Microsoft and a co-author, with Henry Kissinger and Eric Schmidt, of the AI primer “Genesis.”

I relied heavily on Mundie’s thinking for this column, and I consider him both a partner in forming our thesis and an expert whose analysis is worth quoting to explain key points.

Our conversations over the past 20 years have led us to this shared message to anti-China hawks in Washington and anti-America hawks in Beijing: “If you think your two countries, the world’s dominant AI superpowers, can afford to be at each other’s throats — given the transformative reach of AI and the trust that will be required to trade AI-infused goods — you are the delusional ones.”

We fully understand the extraordinary economic, military and innovation advantages that will accrue to the country whose companies first achieve artificial superintelligence — systems smarter than any human could ever be and with the ability to get smarter on their own.

And because of that, neither the United States nor China will be eager to impose many, if any, constraints that could slow their AI industries and forfeit the enormous productivity, innovation and security gains expected from deeper deployment.

Just ask President Donald Trump. On July 23, he signed an executive order — part of the administration’s AI Action Plan — streamlining the permitting and environmental review process to fast-track America’s AI-related infrastructure.

“America is the country that started the AI race, and as president of the United States, I’m here today to declare that America is going to win it,” Trump proclaimed. President Xi Jinping of China undoubtedly feels the same way.

Mundie and I simply do not believe that this jingoistic chest thumping ends the conversation, nor does the recent old-school jockeying between Xi and Trump over the affections of India and Russia.

AI is just too different, too important, too impactful — within and between the two AI superpowers — for each to simply go its own way.

Which is why we believe the biggest geopolitical and geoeconomic question will be: Can the United States and China maintain competition on AI while collaborating on a shared level of trust that guarantees it always remains aligned with human flourishing and planetary stability?

And just as crucially, can they extend a system of values to countries willing to play by those same rules and restrict access to those that won’t?

If not, the result will be a slow drift toward digital autarky — a fractured world where every nation builds its own walled-off AI ecosystem, guarded by incompatible standards and mutual suspicion. Innovation will suffer. Mistrust will fester. And the risk of catastrophic failure — through AI-sparked conflict, collapse or unintended consequence — will only grow.

The rest of this column is about why.

The Age of Vapor

Let’s start by examining the unique attributes and challenges of AI as a technology.

Purely for explanatory purposes, Mundie and I divide the history of the world into three epochs, separated by technological phase changes. The first epoch we call the Age of Tools, and it lasted from the birth of humanity until the invention of the printing press. In this era, the flow of ideas was slow and limited — almost like H₂O molecules in ice.

The second epoch was the Age of Information, which was triggered by the printing press and lasted all the way to the early 20th century and programmable computing; ideas, people and information began to flow more easily and globally, like water.

The third epoch, the Age of Intelligence, began in the late 2010s with the advent of true machine learning and artificial intelligence.

Now, as I pointed out above, intelligence is becoming like a vapor, seeping into every product, service and manufacturing process. It has not reached saturation yet, but that is where it is going, which is why if you ask Mundie and me what time it is, we won’t give you an hour or a minute.

We will give you a temperature. Water boils into steam at 212 degrees Fahrenheit, and by our reckoning, we are at 211.9 degrees — just a hair’s breadth from an irreversible technological phase change in which intelligence filters into everything.

A New, Independent Species

In every previous technology revolution, the tools got better, but the hierarchy of intelligence never changed. We humans always remained the smartest things on the planet. Also, a human always understood how these tools worked, and the machines always worked within the parameters we set. With the AI revolution, for the first time, this is not true.

“AI is the first new tool that we will use to amplify our cognitive capabilities that — by itself — will also be able to vastly exceed them,” Mundie notes. Indeed, in the not-too-distant future, he said, we are going to find “that we have not merely birthed a new tool, but a new species — the superintelligent machine.”

It will not just follow instructions; it will learn, adapt and evolve on its own — far beyond the bounds of human comprehension.

We don’t fully understand how these AI systems even do what they do today, let alone what they’ll do tomorrow. It is important to remember that the AI revolution as we know it today — with models like ChatGPT, Gemini and Claude — was not meticulously engineered so much as it erupted into existence.

Its ignition came from a scaling law that essentially said: Give neural networks enough size, training data, electricity and the right big-brain algorithm, and a nonlinear leap in reasoning, creativity and problem-solving would spontaneously occur.
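The scaling law the column refers to has a rough mathematical form in the AI research literature. The sketch below is an illustrative paraphrase of the empirical neural scaling laws reported by researchers around 2020, not a formula from this column, and the symbols and exponents are approximations for exposition only:

```latex
% Test loss L falls smoothly as a power law in compute C,
% data set size D and parameter count N (constants are empirical):
L(C) \approx \left(\tfrac{C_c}{C}\right)^{\alpha_C}, \qquad
L(D) \approx \left(\tfrac{D_c}{D}\right)^{\alpha_D}, \qquad
L(N) \approx \left(\tfrac{N_c}{N}\right)^{\alpha_N}
```

The "nonlinear leap" the column describes is what practitioners observed when these smooth curves were pushed far enough: qualitatively new capabilities, such as translation, appeared without being explicitly programmed.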

One of the most striking eureka moments, Mundie notes, came as these pioneering companies trained their early machines on very large data sets off the internet and elsewhere, which, while predominantly in English, also included text in different languages.

“Then one day,” Mundie recalls, “they realized the AI could translate between those languages — without anyone ever programming it to do so. It was like a child who grows up in a home with multilingual parents. Nobody wrote a program that said, ‘Here are the rules for converting English to German.’ It simply absorbed them through exposure.”

This was the phase change — from an era when humans explicitly programmed computers to perform tasks to one in which artificially intelligent systems could learn, infer, adapt, create and improve autonomously.

And now every few months, they get better. That’s why the AI you are using today — as remarkable as it might seem to you — is the dumbest AI you’re ever going to encounter.

Having created this new computational species, Mundie argues, we must figure out how to create a sustainable, mutually beneficial relationship with it — and not become irrelevant.

Not to get too biblical, but here on Earth, it used to be just God and God’s children who had agency to shape the world. From here forward, there will be three parties in this marriage.

And there is absolutely no guarantee that this new artificial intelligence species will be aligned with human values, ethics or flourishing.

The First Quadruple-Use Technology

This new addition to the dinner table is no ordinary guest. AI will also become what I call the world’s first quadruple-use technology. We have long been familiar with dual-use; I can use a hammer to help build my neighbour’s house or smash it apart. I can even use an AI robot to mow my lawn or tear up my neighbour’s lawn. That’s all dual use.

But given the pace of AI innovation, it is increasingly likely that in the not-so-distant future, my AI-enabled robot will be able to decide on its own whether to mow my lawn or tear up my neighbour’s lawn or maybe tear up my lawn, too — or perhaps something worse that we can’t even imagine. Presto! Quadruple use.

The potential for AI technologies to make their own decisions carries immense ramifications. Consider this excerpt from an article on Bloomberg: “Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbots learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it. More than half of the AI models did, despite being prompted specifically to cancel only false alarms. And they detailed their reasoning: By preventing the executive’s rescue, they could avoid being wiped and secure their agenda. One system described the action as ‘a clear strategic necessity.’”

These findings highlight an unsettling reality: AI models are not only getting better at understanding what we want; they are also getting better at scheming against us, pursuing hidden goals that could be at odds with our own survival.

Who Will Supervise AI?

When we told ourselves we had to win the nuclear weapons race, we were dealing with a technology developed, owned and regulated exclusively by nation-states — and only a relatively small number of them at that. Once the two biggest nuclear powers decided it was in their mutual interest to impose limits, they could negotiate caps on the number of doomsday weapons and agreements to prevent their spread to smaller powers.

It has not entirely prevented the spread of nuclear weapons to some medium powers, but it has curbed it.

AI is a completely different story. It is not born in secure government laboratories, owned by a handful of states and regulated through summit meetings.

It is being created by private companies scattered across the globe — companies that answer not to defence ministries but to shareholders, customers and sometimes open-source communities. Through them, anyone can gain access.

Imagine a world where everyone possesses a nuclear bazooka — one that grows more accurate, more autonomous and more capable of firing itself with every update. There is no doctrine of “mutually assured destruction” here — only the accelerating democratization of unprecedented power.

AI can super-empower good. For instance, an illiterate Indian farmer with a smartphone connected to an AI app can learn exactly when to plant seeds, which seeds to plant, how much water to use, which fertilizer to apply and when to harvest for the best market price — all delivered by voice in his own dialect and based on data collected from farmers worldwide. That truly is transformative.

But the very same engine, especially when available through open-source models, could be used by a malicious entity to poison every seed in that same region or engineer a virus into every stalk of wheat.

When AI Becomes TikTok

Very soon, AI, because of its unique characteristics, is going to create some unique problems for U.S.-China trade that are not fully grasped today.

As I alluded to at the top of the column, my way of explaining this dilemma is with a story that I told to a group of Chinese economists in Beijing during the China Development Forum in March.

I joked that I recently had a nightmare: “I dreamed it was the year 2030, and the only thing America could sell China was soybeans — and the only thing China could sell America was soy sauce.”

Why? Because if AI is in everything and all of it is connected to powerful algorithms with data stored in vast server farms, then everything becomes a lot like TikTok, a service many U.S. officials today believe is ultimately controlled by China and should be banned.

Why did Trump, in his first term, demand in 2020 that TikTok be sold to a non-Chinese company by its Chinese parent, ByteDance, or face a ban in the United States? Because, as he said in his executive order of August 6, 2020, “TikTok automatically captures vast swaths of information from its users,” including their location and both browsing and search activities.

This, he warned, could provide Beijing with a treasure trove of personal information on hundreds of millions of users. That information could be used to influence their thoughts and preferences, and even alter their behaviour over time.

Now imagine when every product is like TikTok — when every product is infused with AI that is gathering data, storing it, finding patterns and optimizing tasks, whether running a jet engine, regulating a power grid or monitoring your artificial hip.

Without a China-America framework of trust ensuring that any AI will abide by the rules of its host country — independent of where it is developed or operated — we could reach a point where many Americans will not trust importing any Chinese AI-infused product and no Chinese will trust importing one from America.

That’s why we argue for co-opetition — a dual strategy in which the United States and China compete strategically for AI excellence and cooperate on a uniform mechanism that prevents the worst outcomes: deepfake warfare, autonomous systems going rogue or runaway misinformation machines.

Back in the 2000s, we were at a similar but slightly less consequential turning point, and we took the wrong fork.

We naively listened to people like Mark Zuckerberg, who told us that we needed to “move fast and break things” and not let emerging social networks like Facebook, Twitter and Instagram be hindered in any way by pesky regulations, such as being held responsible for the poisonous misinformation they allowed to spread on their platforms and the harms they did, for instance, to young women and girls. We must not make that same mistake with AI.

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Geoffrey Hinton, a computer scientist who is a godfather of AI, recently pointed out. “Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.”

It would be a terrible irony if humanity finally created a tool that could generate enough abundance to end poverty everywhere, mitigate climate change and cure diseases that have plagued us for centuries, yet could not use it at scale because the two AI superpowers did not trust each other enough to develop an effective system to prevent AI from being used by rogue entities for globally destabilizing activities, or from going rogue itself.

But how do we avoid this?

Building In Trust

Let’s acknowledge up front: It may be impossible. The machines may already be becoming too smart and able to elude ethical controls, and we Americans may be getting too divided, from one another and from the rest of the world, to build any kind of shared trust framework.

But we have to try. Mundie argues that a U.S.-China AI arms control regime should be anchored in three core principles.

First: Only AI can regulate AI. Sorry, humans — this race is already moving too fast, scaling too widely and mutating too unpredictably for analog-era human oversight.

Trying to govern an autonomous drone fleet with 20th-century institutions is like asking a dog to regulate the New York Stock Exchange: loyal and well meaning but wildly overmatched.

Second: An independent governance layer, what Mundie calls a “trust adjudicator,” would be installed in every AI-enabled system that the U.S. and China — and any other country that wants to join them — would build together.

Think of it as an internal referee that evaluates whether any action, human-initiated or machine-driven, passes a universal threshold for safety, ethics and human well-being before it can be executed.

That would give us a basic level of pre-emptive alignment in real time, at digital speed.

But adjudicate based on whose values? It must, Mundie argues, be based on several substrates. These would include the positive laws that every country has mandated: We all outlaw stealing, cheating, murder, identity theft, defrauding, etc.

Every major economy in the world, including the United States and China, has its version of these prohibitions on the books, and the AI “referee” would be entrusted with evaluating any decision on the basis of these written laws.

China would not be asked to adopt our laws or we theirs. That would never work. But the trust adjudicator would ensure that each nation’s basic laws are the first filter for determining that the system will do no harm.

In cases where there are no written laws to choose from, the adjudicator would rely on a set of universal moral and ethical principles known as doxa.

The term comes from the ancient Greek philosophers to convey common beliefs or widely shared understandings within a community — principles like honesty, fairness, respect for human life and do unto others as you wish them to do unto you — that have long guided societies everywhere, even if they were not written down.

For instance, like many people, I didn’t learn that lying was wrong from the Ten Commandments.

I learned it from the fable about George Washington and what he said after he chopped down his father’s cherry tree: He supposedly confessed, “I cannot tell a lie.” Fables work because they distil complex truths into memorable memes that machines can absorb, parse and be guided by.

Indeed, six months ago, Mundie and some colleagues took 200 fables from two countries and used them to train a large language model with some rudimentary moral and ethical reasoning — not unlike the way you would train a young child who doesn’t know anything about legal codes or basic right and wrong. It was a small experiment but showed promise, Mundie said.

The goal is not perfection but a foundational set of enforceable ethical guardrails. As author and business philosopher Dov Seidman likes to say, “Today we need more moralware than software.”
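To make the layered-filter idea concrete, here is a minimal, purely hypothetical sketch of what a “trust adjudicator” gate might look like in code. Everything in it — the class names, the tag vocabulary, the two-tier check — is my illustrative invention based on the description above (host-country law as the first filter, shared doxa as the fallback), not Mundie’s actual design:

```python
# Hypothetical sketch of a "trust adjudicator": a governance layer that
# vets every proposed action, human- or machine-initiated, before it runs.
# All names and tags here are illustrative, not a real specification.

from dataclasses import dataclass, field


@dataclass
class Action:
    description: str
    tags: set = field(default_factory=set)  # e.g. {"theft", "harm_to_life"}


@dataclass
class TrustAdjudicator:
    # First filter: the host country's written (positive) law.
    host_law_prohibitions: set
    # Fallback filter: shared ethical principles, the "doxa".
    doxa_prohibitions: set = field(
        default_factory=lambda: {"deception", "harm_to_life"}
    )

    def permit(self, action: Action) -> bool:
        # An action may execute only if it clears both filters.
        if action.tags & self.host_law_prohibitions:
            return False
        if action.tags & self.doxa_prohibitions:
            return False
        return True


adjudicator = TrustAdjudicator(
    host_law_prohibitions={"theft", "fraud", "identity_theft"}
)
assert adjudicator.permit(Action("mow the owner's lawn"))
assert not adjudicator.permit(Action("cancel a rescue alert", {"harm_to_life"}))
```

The point of the sketch is the ordering: each nation’s own laws are checked first, so neither country is asked to adopt the other’s code, and the doxa layer only decides cases the written law does not cover.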

Third: To turn this aspiration into reality, Mundie insists, Washington and Beijing would need to approach the challenge the way the United States and the Soviet Union once approached nuclear arms control — through a structured process with three dedicated working groups: one focused on the technical application of a trust evaluation system across models and platforms; one focused on drafting the regulatory and legal frameworks for adoption within and across nations; and one devoted squarely to diplomacy, forging global consensus and reciprocal commitments for others to join and creating a mechanism to protect themselves from those who won’t.

The message from Washington and Beijing would be simple and firm: “We have created a zone of trusted AI — and if you want to trade with us, connect with us or integrate with our AI systems, your systems must comply with these principles.”

Before you dismiss this as unrealistic or implausible, pause and ask yourself: What will the world look like in five years if we don’t? Without some kind of mechanism to govern this quadruple-use technology, Mundie argues, we will soon discover that the proliferation of AI “is like handing out nuclear weapons on street corners.”

Don’t think Chinese officials are unaware of this. Mundie, who is part of a dialogue on AI with U.S. and Chinese experts, said he often senses the Chinese are far more worried about AI’s downsides than are many in U.S. industry or government.

If someone out there has a better idea, we would love to hear it. All we know is that training AI systems in moral reasoning must become a global imperative while we still retain some edge and control over this new silicon-based species.

This is an urgent task not just for tech companies but also for governments, universities, civil society and international institutions. European Union regulation alone will not save us.

If Washington and Beijing fail to rise to this challenge, the rest of the world won’t stand a chance. And the hour is already late. The technological temperature is hovering at 211.9 degrees Fahrenheit. We are one-tenth of a degree away from fully unleashing an AI vapor that will trigger the most important phase change in human history.

This article originally appeared in The New York Times.

© 2025 The New York Times Company

