THE NEW YORK TIMES: How AI-generated videos are distorting your child’s YouTube feed

The algorithm pushes bizarre, often nonsensical, artificial intelligence-generated videos from channels claiming to teach ‘toddlers’ about the alphabet and animals.

Arijeta Lajka
The New York Times
YouTube is recommending AI-generated videos to children.

Four seconds into one version of “Old MacDonald Had a Farm,” an animated horse with two arms and four legs hatches from an egg.

In another video, a pink elephant, an orange flamingo and other animals appear next to letters of the alphabet, performing complicated gymnastic maneuvers on tightropes.

And in another video, animals form from paint being squirted into a glass of water and inexplicably grow mermaid tails.


The New York Times reviewed these clips, along with more than 1,000 other videos recommended to young children on YouTube, and found that the algorithm pushes bizarre, often nonsensical, artificial intelligence-generated videos from channels claiming to teach “toddlers” and “preschoolers” about the alphabet and animals.

In some videos, animals and people have warped faces or extra body parts. Often, the videos contain garbled text. Most clips have incoherent narratives, some riddled with misinformation. And none are longer than about 30 seconds, allowing little time to develop ideas, plots or any sense of repetition that is often necessary for learning.

Now produced with the help of readily available AI tools and online tutorials, many of these videos have millions of views and counting, with channels churning out videos at a rapid rate, sometimes even multiple times a day.

Do the creators behind the videos actually want to help? Or do they just want to grab your kids’ attention?

Many of the YouTube accounts producing AI-generated videos reviewed by the Times specifically target the youngest of viewers and their parents, marketing their channels as “educational” as opposed to entertainment. Creators are profiting off this content with little oversight from YouTube.

“To me, the meaninglessness of these videos is a huge problem because they’re just attention capture,” said Dr. Jenny Radesky, a developmental behavioural paediatrician and associate professor of paediatrics at the University of Michigan Medical School. “And then the worst case is that it’s so fantastical and full of attention capture that it is going to be cognitively overloading to the child.”

Radesky and others raised concerns about hyperrealistic AI content, especially for children who are too young to distinguish fantasy from reality.

McCall Booth, a developmental psychologist and researcher at Georgetown University, said children “may have a harder time in the future identifying fake content because their mental schema had already adapted to include improbable but aesthetically realistic character actions.”

Even on YouTube Kids, which is intended to provide a more controlled digital environment for children, these kinds of AI videos are easy to find. Last summer, videos of AI-generated animals diving into pools were even a TikTok trend.


Rachel Barr, a developmental psychologist and director of the Georgetown University Early Learning Project, pointed out that the pool-diving videos in particular contain a lot of conflicting information for young children who may have a hard time deciphering what is real.

“The animal could be real. The pool could be real, but again, it’s a mismatch between what should happen in the real world between those two things. So that is going to place a lot of this cognitive load on the child to try and map those things together,” Barr said.

“It may seem like it’s innocuous,” she added. “But that is not going to help them learn either about swimming or giraffes or ‘G.’”

Radesky explained that well-crafted media serves as a mirror, reflecting the world children already know back to them. Shows like “Mister Rogers’ Neighborhood” or “Sesame Street,” for example, intentionally try to help make sense of the world — not only through letters and numbers, but also through emotions and learning about interpersonal relationships.

The American Academy of Pediatrics issued a guide for parents on how to select media content for their young children, telling parents to avoid content that is either AI-generated or highly sensationalized. The guidance also cautioned against consuming short-form videos.

While there aren’t many studies yet on how short-form media affects young children, Barr said that for children under the age of 5 whose attention systems are still developing, the videos move too rapidly, and usually aren’t long enough to include any meaningful context or story plot.

The Times focused primarily on YouTube Shorts when conducting its analysis of AI videos, as most AI tools default to short-form video and offer vertical formatting options.

Over the course of several weeks, the Times watched videos from popular children’s channels on YouTube like CoComelon, “Bluey” or Ms. Rachel from a private browser at different times throughout the day.

Then we scrolled through the platform’s recommended YouTube Short videos in 15-minute intervals in order to better understand how the algorithm floods the feed with this content.


In one 15-minute session, after watching CoComelon’s “Wheels on the Bus” video, more than 40 per cent of the videos watched appeared to contain AI-generated visuals. The Times manually reviewed each of the videos, some of which clearly featured YouTube’s label for “altered or synthetic content,” while others displayed visual errors or other distortions in the background.

The AI-generated content wasn’t always obviously flawed, and some videos were sufficiently seamless to evade casual detection by the human eye. To further vet the videos, the Times used an AI detector to determine with high probability that the videos, and in some cases the music and voices, were AI-generated.

Some platforms have begun to tighten their rules around the use of these tools. Pinterest has features that allow users to select how much of this kind of content they want to see. TikTok also said it was testing ways that would enable people to reduce the amount of AI content in their feeds. Last month, YouTube announced new controls that allow parents to set time limits on YouTube Shorts.

The Times requested comment from YouTube on its policy around AI videos for children, and shared five channels as examples. In response, YouTube suspended all five accounts from the YouTube Partner Program, meaning they are ineligible to earn ad revenue on YouTube and are blocked from appearing on YouTube Kids. The Times also sent three examples of hyperrealistic AI videos on YouTube Kids, which YouTube then removed from the app.

YouTube also stated that it removed one video the Times shared for violating child safety policies. The AI video showed animals being chased and turning different colors once inoculated with a syringe. However, similar videos can still be found on the channel.

“We require creators to disclose when they’ve used AI to create realistic content, meaning things a viewer could easily mistake for a real person, place, or event,” Boot Bullwinkle, a YouTube spokesperson, said in an email to the Times.

But the Times’ review found that creators are not consistently disclosing whether videos contain synthetic visuals designed to look realistic. And when it comes to animated AI videos for children, YouTube does not require labels at all.

This means that much of the burden of identifying AI content is falling to parents — a task that is daunting even for experts as the tools that make this content are rapidly improving.

Some parents have turned to Reddit looking for tips on filtering out AI videos on YouTube. Some commenters advise fellow parents to create their own playlists of vetted content, while others argue for boycotting the platform altogether.

Allison Sims, 34, has two children and lives in Texas. She often turns on her own YouTube account to keep her 2-year-old occupied while she’s making dinner. Her daughter watches Ms. Rachel, The Wiggles and other channels that play nursery rhymes. But it wasn’t long before she figured out how to scroll through YouTube Shorts.

After coming across several shorts that she found disturbing in her daughter’s watch history, Sims said she removed the app from the iPad. She shared some of the videos her daughter watched with the Times, which included AI-generated videos.

“Because AI is so new and as a parent, I wouldn’t know what to look out for except for when they’re very obvious that I stop and look at it,” Sims said. “But I feel like it’s something that as parents we should kind of know and be aware of.”

Sims also questioned the motive of the creators behind the videos. “Is it that they’re actually wanting to help or is it they’re trying to grab your kids’ attention?”

Many of the YouTube accounts uploading AI content for children are largely anonymous, with no contact information or identifiable details about who is behind them.

But one creator, Syeda Jaria Hassan, spoke to the Times and explained how she taught herself how to make AI videos using tools like Google’s Whisk and Runway. She said that creating AI content for children has become her full-time job.

Hassan, who lives in the city of Sargodha in Punjab, Pakistan, said she decided to focus on making content for children after teaching at a Montessori school for children between 4 and 8. Her account, Suno Kids TV, which is described as a channel to educate and entertain children, features animated AI videos of animals and sing-along songs.

When asked about how children can be distracted by these kinds of effects, Hassan responded that TV channels and other YouTube channels for children also rely heavily on visual effects and that she’s just following a model of children’s programming that has been around for years.

However, when it comes to learning, experts say children benefit most from watching media that has a clear narrative with a beginning, middle and end, along with characters that children can attach to and scenes that relate to their real life.

Barr noted that storybooks and other well-structured content align with a familiar format: following a character through a journey. Media that illustrates relatable scenes, like going to the park, ultimately helps children understand and connect back to their own world.

Simple language and short phrases also help with cognitive development. Programming that teaches children concepts like problem-solving, or that features intentional repetition, can aid memory recall.

One example is PBS Kids’ “Daniel Tiger’s Neighborhood,” a modern spinoff of “Mister Rogers’ Neighborhood,” which follows a young animated tiger who teaches life skills and social strategies. The show works with child development experts when crafting stories.

Ellen Doherty, chief creative officer at Fred Rogers Productions, explained that the team developed a structural pattern for the show: two separate short stories in every episode, with songs that strategically reinforce the episode’s themes and that parents and children can both sing and remember. The music also helps move the story along, but at a controlled pace.

“Everything happens in a pace that a young child who does not have cinematic language yet can follow and can actually literally process what’s happening,” Doherty said.

In one story, Daniel Tiger teaches children about brushing their teeth through song, making sure to interact with young viewers and taking long pauses.

“That spark of human connection is everything,” Doherty said.

This article originally appeared in The New York Times.

© 2026 The New York Times Company
