THE NEW YORK TIMES: The age of artificial intelligence political ads is upon us. Here’s what to look out for
Weird voice-overs. Fake images of politicians. Scenes that seem real until you look closely. The AI era of campaign advertising seems to be upon us — and it’s a pretty fast-moving and complex situation.

Fortunately, The New York Times’ Tiffany Hsu, a technology reporter who writes about disinformation, has joined us to help explain how AI-generated ads are already shaping the political landscape and where this phenomenon may be going next.
Here’s our conversation, edited and condensed:
Can they really do that?
KATIE GLUECK: Tiffany, thank you for joining. I want to start by asking you about an AI-generated video from the Senate Republican campaign arm that draws on old tweets to put words in the mouth of a fake image of James Talarico, the Democratic nominee for Senate in Texas.
My first thought was, can they really do that? Are there any guardrails right now on how public figures can be represented in AI-generated content?
HSU: They can, and they already have! In October, the same group put out an attack ad with a deepfake of Sen. Chuck Schumer, the Democratic leader from New York. There was another AI-generated attack ad around the same time targeting Zohran Mamdani, now the New York mayor, during his campaign.
Protections do exist, in theory: As of December, 26 states have laws regulating political deepfakes, most of them requiring some sort of disclosure about the use of artificial intelligence or barring deepfake distribution right before an election.
Do these laws have teeth? Debatable. The Federal Communications Commission has done some work on this front, banning AI-generated voices in robocalls (like the one in 2024 that impersonated President Joe Biden) and has considered rules about political deepfakes in television and radio ads.
What is a deepfake?
GLUECK: Remind us, what is a deepfake?
HSU: There’s a more detailed technical explanation, but when we talk about a deepfake, we usually mean a deceptive image or video of a real person.
GLUECK: How widespread do you expect AI-generated advertising to be in the midterm elections this year?
HSU: I wouldn’t be surprised to see it used more frequently. The Trump administration has been pretty brazen about communicating via AI-generated memes and digitally altered content, and if the president sets the political tone, then candidates could be less cautious about tapping the technology.
Their calculus might be that the public is becoming increasingly desensitized to AI’s reality distortion effect. They’re already being bombarded with fake influencers, fake celebrities, fake war reporting. What’s another fake politician?
GLUECK: Is it mostly Republicans who have used this tactic so far, or are you seeing some Democrats dip into it too?
HSU: This is an equal-opportunity technology. Jesse Jackson Jr., who is trying to reclaim his former seat representing Illinois in Congress, released an ad this month with former Rep. Bobby Rush of Illinois delivering an endorsement in an AI-enhanced voice (his vocal cords were weakened by cancer).
A Democrat in upstate New York who is running to replace Rep. Elise Stefanik, a Republican, ran an ad over the summer that used AI-generated video of Stefanik to mock her.
The National Democratic Training Committee has discouraged candidates from using deepfakes of their opponents.
GLUECK: Are there any particularly striking examples of AI in ads that have floated under the radar?
HSU: The local elections are going to be interesting to watch.
There was one ad last fall that came out of a school board race in the Columbus, Ohio, area. Three male candidates, all Republican-endorsed, put out AI-generated clips of their three female, Democratic-backed opponents saying outrageous things. (The men lost.)
The discussion around that move — was it deceptive and dangerous or just lighthearted fun? — really encapsulated the broader debate around political deepfakes.
What are politicians doing to combat this?
GLUECK: Voters are going to get confused. Are politicians or campaigns doing anything to guard against AI-fueled misimpressions? Do experts have thoughts on what they should be doing?
HSU: There’s a quote attributed to Jonathan Swift, the 18th-century writer of “Gulliver’s Travels,” that I think about often these days: “Falsehood flies, and truth comes limping after it.”
Campaigns are on the back foot when a convincing deepfake starts circulating, because voters often form first impressions quickly and don’t double back for a fact-check.
That said, politicians are doing what they can. Some are recording every event they attend in an attempt to maintain an accurate record. Some are tapping consultants from the tech world.
A few days ago, YouTube announced a pilot program to help political candidates and others detect videos that use their AI-generated likenesses. Both Microsoft and YouTube are involved in an effort to develop technical standards to help identify where a piece of content originated and whether it is authentic.
(BEGIN OPTIONAL TRIM.)
GLUECK: Our colleague Kevin Roose told me a few weeks ago that the politics of AI don’t break down neatly along partisan lines. Does the same go for conversations about regulating AI in campaign advertising, or is the divide clearer?
HSU: Support for legislation about election deepfakes actually tends to be bipartisan!
(END OPTIONAL TRIM.)
GLUECK: Our readers are savvy and can certainly spot sloppy AI efforts. But the technology is only going to improve, right?
HSU: Oh yeah. The technology is improving faster than detection services or legislation can keep up. Every once in a while, my teammate Stuart Thompson whips up a quiz to test whether readers can tell AI-generated content apart from reality. I can say from personal experience that it’s getting harder to score respectably, even as someone who constantly has AI on the brain!
What should you look for?
GLUECK: What should voters look for to make sure that what they’re watching is real?
HSU: AI often produces images that feel unnaturally smooth and centered. Sometimes, close-ups of AI avatars’ eyes will show different reflections or, in a video, weird blinking patterns. The hairline is occasionally funky, and the background might not obey the laws of nature (we found one recent video of a purported explosion at a high-rise in Bahrain that had cars on the road blending into one another).
Of course, clever prompting can sidestep many of these tells.
This article originally appeared in The New York Times.
© 2026 The New York Times Company