THE NEW YORK TIMES: ICE shootings have exposed how AI is deepening America’s political crisis

Experts fear that Americans are losing their ability to distinguish between fact and fiction — and that fewer people seem to care about the difference.

Stuart A. Thompson, Tiffany Hsu and Steven Lee Myers
The New York Times
Experts fear that Americans are losing their ability to distinguish between fact and fiction — and that fewer people seem to care about the difference.  Credit: DAVID GUTTENFELDER/NYT

The deaths of two protesters in Minneapolis at the hands of law enforcement have plunged the country into a political crisis much like the one after the police killing of George Floyd in the same city in 2020.

Now, though, advances in technology and an erosion of trust are distorting realities, both online and off, like never before.

Enormous changes have transformed the internet in the six years since Floyd’s death in Minneapolis. Artificial intelligence tools did not exist for general use in 2020; now they are everywhere.

Social media has become even more toxic. Efforts to moderate it have loosened.

The influencers behind some of the most pernicious digital lies, who once toiled in the dark corners of the internet, are now emboldened, promoted on major platforms and even mimicked by some of the most powerful people in the country.

All of these forces came together with newfound intensity in the opening weeks of the year.

After federal immigration agents shot and killed Renee Good and Alex Pretti in Minneapolis, AI fakes of the victims spread, genuine videos were viewed with suspicion, a Democratic lawmaker displayed an altered image on the Senate floor and online sleuths misidentified random people as being the agents involved in the shootings.

The federal government spread an altered image and backed provably false narratives.

Experts fear that Americans are losing their ability to distinguish between fact and fiction — and that fewer people seem to care about the difference.

The online churn that now accompanies any major news event obscures the common reference points that once helped guide the country forward.

With technology, impudence and apathy all colliding at once, the shock to American attitudes toward reality — and the public consensus required by the democratic experiment — may be a permanent one, experts said.

“In moments past, we thought that this online fever would break, and now it is a systemic feature rather than a bug,” said Graham Brookie, the senior director of the Digital Forensic Research Lab, which studies online communities.

“This is just how it is right now — we’re all collectively navigating that for the worse.”

Although these volatile forces have been amassing for years, the collective threat they posed had remained largely theoretical.

Even compared with the informational chaos of 2020, which included COVID-19 conspiracy theories and baseless claims of election fraud, facts and truth now face a far more hostile environment.

Disinformation watchdogs are under increasing political pressure from Republicans, and researchers have lost funding.

The audience for fact checks is far outstripped by interest in false and misleading posts. Initiatives at social media platforms such as X and Facebook to limit or remove such content have been slashed or abandoned, leaving the digital sewage to flow directly to users.

Social media is flooded with so much dubious content, such as realistic dupes of celebrity events, that many users appear to be worn down by the effort required to establish what is genuine.

The result: an “authenticity collapse,” said Alon Yamin, CEO of Copyleaks, which offers tools to detect the presence of AI in content.

“The internet is lying by default, and the media ecosystem is just flooded with content that you know looks real, sounds real but is definitely not real,” he said. “There is a danger here of almost losing touch with reality.”

In 2020, Floyd’s murder was also shadowed by falsehoods, but they were mostly limited to conspiracy theories shared in posts on social media that reached far fewer viewers.

No artificial intelligence tools were widely available then, and social media companies funded large teams to identify and combat the falsehoods, blunting their impact. One notable video claiming that Floyd’s death was faked was shared on Facebook only 100 times.

Today’s falsehoods routinely reach millions. One image, which garnered 1.4 million views, claimed to show Mr Pretti wearing a ruffled pink dress and a tiara. (It was someone else.)

Another image, garnering 1 million views, claimed to show him helping two veterans in his role as a nurse — though it is most likely an AI-generated fake.

In Minneapolis this year, the violent clashes between protesters and federal agents have often been captured on verified video — evidence that not long ago would have settled debates on what had transpired.

Yet political influencers with millions of followers on platforms such as X and Facebook instead sought to paint Ms Good and Mr Pretti as the aggressors in their fatal interactions with law enforcement, casting doubt on what people could see with their own eyes.

The most significant transformation since 2020 has come from the widespread adoption of AI technology.

After both shootings in Minneapolis, and other actions by federal agents there, fake videos and images circulated, depicting events that never occurred. Almost as troubling, experts said, was that some of the real content was widely dismissed as AI-generated fakery.

Videos and photographs, for instance, clearly showed Mr Pretti with a cellphone in his hand. Yet some insisted that they saw a handgun.

The faulty interpretations hinged on images “enhanced” using AI tools in an apparent attempt to increase the resolution. In the process, the tools introduced errors and other changes.

It has been difficult to keep up recently with the “incredible” scale of authentic and AI-generated content, both from regular social media users and the Trump administration, said Sandra Ristovska, founding director of the Visual Evidence Lab and associate professor of media studies at the University of Colorado Boulder.

“We have a very long history of weaponizing and manipulating images,” she said. “Social media today, coupled with the generative AI tools we’re seeing, have taken the problem to an unprecedented level.”

The turmoil in Minnesota, and the online reaction to it, was just one demonstration of reality distortion among dozens.

The relentless procession of examples last month included AI-generated fakes of Nicolás Maduro being arrested by US forces — for much of the public, the first images of the arrest they saw.

When President Donald Trump shared an actual photo depicting Maduro in cuffs and a blindfold, social media users and journalists wrestled over whether it was real.

Federal officials at the highest levels of government now vigorously boost falsehoods — several administration officials initially reacted to Mr Pretti’s death by baselessly claiming he was a terrorist who wanted to massacre law enforcement officials.

Many right-wing social media users echoed the sentiment as a way to blame Mr Pretti for his death.

Last week, Trump used his Truth Social platform to attack California Gov. Gavin Newsom, a Democrat.

The president falsely claimed that Walmart was shutting down hundreds of stores across the state and shared a TikTok post featuring an AI-generated female avatar who accused Mr Newsom, without evidence, of laundering drug money for Mexican cartels.

Mr Newsom’s press office debunked the claims on X, adding: “We cannot believe we have to say any of this out loud. We cannot believe this is real life.”

This article originally appeared in The New York Times.

© 2026 The New York Times Company
