THE NEW YORK TIMES: AI videos have flooded social media and no one was prepared

A video on TikTok in October appeared to show a woman being interviewed by a television reporter about food stamps.
The women weren’t real. The conversation never happened.
The video was generated by artificial intelligence.
And yet, people seemed to believe this was a real conversation about selling food stamps for cash, which would have been a crime.
In their comments, many reacted to the video as though it was real. Despite subtle red flags, hundreds vilified the woman as a criminal — some with explicit racism — while others attacked government assistance programs, just as a national debate was raging over President Donald Trump’s planned cuts to the program.
Videos like the fake interview, created with OpenAI’s new app, Sora, show how easily public perceptions can be manipulated by tools that can produce an alternate reality with a series of simple prompts.
In the two months since Sora arrived, deceptive videos have surged on TikTok, X, YouTube, Facebook and Instagram, according to experts who track them. The deluge has raised alarm over a new generation of disinformation and fakes.
Most of the major social media companies have policies that require disclosure of artificial intelligence use and broadly prohibit content intended to deceive. But those guardrails have proved woefully inadequate for the kind of technological leaps OpenAI’s tools represent.
While many videos are silly memes or cute but fake images of babies and pets, others are meant to stoke the kind of vitriol that often characterises political debate online. They have already figured in foreign influence operations, like Russia’s ongoing campaign to denigrate Ukraine.
Researchers who have tracked deceptive uses said the onus was now on companies to do more to ensure people know what is real and what isn’t.
“Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that,” said Sam Gregory, executive director of Witness, a human rights organisation focused on the threats of technology. “Could they do better in proactively looking for AI-generated information and labelling it themselves? The answer is yes, as well.”
The video about food stamps was one of several that appeared as the government shutdown in the United States was dragging on, leaving real recipients of the aid, called the Supplemental Nutrition Assistance Program, or SNAP, scrambling to feed their families.
Fox News fell for a similar fake video, treating it as an example of public outrage over the abuse of food stamps in an article that has since been removed from its website. A Fox spokesperson confirmed that the article had been removed, but did not elaborate.
The fakes have been used to mock not only poor people, but also Trump. One video on TikTok showed the White House with what sounded like a voice-over of Trump berating his Cabinet over the release of documents involving Jeffrey Epstein, the disgraced financier and convicted sex offender. According to NewsGuard, a company that tracks disinformation, the video, which was not labelled AI, was viewed by more than 3 million people in a matter of days.
Until now, the platforms have relied largely on creators to disclose that the content they are posting is not real, but the creators don’t always do so. And though there are ways for platforms like YouTube, TikTok and others to detect that a video was made using artificial intelligence, they don’t always flag it to viewers right away.
“They should have been prepared,” Nabiha Syed, executive director of the Mozilla Foundation, the tech-safety nonprofit behind the Firefox browser, said of the social media companies.
The companies behind the AI tools say they are trying to make clear to users what content is generated by computers. Sora and the rival tool offered by Google, called Veo, both embed a visible watermark onto the videos they produce.
Sora, for example, puts a “Sora” label on each video. Both companies also include invisible metadata, which can be read by a computer, that establishes the origin of each fake.
The idea is to inform people that what they are seeing is not real and to give the platforms that feature them the digital signals to automatically detect them.
Some platforms are using that technology. TikTok, apparently in response to concerns over how convincing the fake videos are, announced last week that it would tighten its rules around the disclosure of the use of AI.
It also promised new tools to let users decide how much synthetic — as opposed to genuine — content they wanted.
YouTube uses Sora’s invisible watermark to append a small label indicating that the AI videos were “altered or synthetic.”
“Viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic,” said Jack Malon, a YouTube spokesperson.
Labels, however, can sometimes show up after thousands or even millions of people have already watched the videos. Sometimes they don’t appear at all.
People with malicious intent have discovered that it is easy to get around the disclosure rules. Some simply ignore them. Others manipulate the videos to remove the identifying watermarks. The Times found dozens of examples of Sora videos appearing on YouTube without the automated label.
Several companies have sprung up offering to remove logos and watermarks. And editing or sharing videos can wind up removing the metadata embedded in the original video indicating it was made with AI.
Even when the logos remain visible, users could miss them when scrolling quickly on their phones.
Nearly two-thirds of more than 3000 users who commented on the TikTok video about food stamps responded as if it were real, according to an analysis of the comments by The New York Times, which used AI tools to help classify the content in the comments.
Inauthentic accounts, including those run by foreign or malicious agents, play a huge role in distributing and promoting content on social media, but it was not clear whether they did so in the conversations around these videos.
“There’s kind of this individual vigilance model,” Mr Gregory said. “That doesn’t work if your whole timeline is stuff that you have to apply closer vigilance to. It bears no resemblance to how we interact with our things.”
In a statement, OpenAI said it prohibits deceptive or misleading uses of Sora and takes action against violators of its policies. The company said its app was just one among dozens of similar tools capable of making increasingly lifelike videos — many of which do not employ any safeguards or restrictions on use.
“AI-generated videos are created and shared across many different tools, so addressing deceptive content requires an ecosystem-wide effort,” the company said.
(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to AI systems. The companies have denied those claims.)
A spokesperson for Meta, which owns Facebook and Instagram, said it was not always possible to label every video generated by AI, especially as the technology is fast evolving. The company, the spokesperson said, is working to improve systems to label content.
X and TikTok did not respond to requests for comment about the flood of AI fakes.
Alon Yamin, chief executive of Copyleaks, a company that detects AI content, said the social media platforms had no financial incentive to restrict the spread of the videos as long as users kept clicking on them.
“In the long term, once 90 per cent of the traffic for the content in your platform becomes AI, it begs some questions about the quality of the platform and the content,” Mr Yamin said. “So maybe longer term, there might be more financial incentives to actually moderate AI content. But in the short term, it’s not a major priority.”
The advent of realistic videos has been a boon for disinformation, fraud and foreign influence operations. Sora videos have already featured in recent Russian disinformation campaigns on TikTok and X. One video, with its watermarks crudely obscured, sought to exploit a ballooning corruption scandal among Ukraine’s political leadership. Others have created fake videos of front-line soldiers weeping.
Two former officials of a now-disbanded State Department office that fought foreign influence operations, James P. Rubin and Darjan Vujica, argued in a new article in Foreign Affairs that advancements in AI were intensifying efforts to undermine democratic countries and divide societies.
They cited AI videos in India that denigrated Muslims to stoke religious tensions. A recent one on TikTok appeared to show a man preparing biryani rice on the street with water from the gutter. Though the video had a Sora watermark and the creator said it was generated by AI, it was widely shared on X and Facebook, spread by accounts that commented on it as if it were real.
“They are making things, and will continue to make things, much worse,” Mr Vujica said in an interview, referring to the new generation of AI-made videos. “The barrier to use deepfakes as part of disinformation has collapsed, and once disinformation is spread, it’s hard to correct the record.”
This article originally appeared in The New York Times.
© 2025 The New York Times Company
