At 3 p.m. on a random Tuesday, a 5-year-old in suburban Ohio taps YouTube Kids. Within seconds, the algorithm serves her a video: bright, flashing colors, a garbled AI voice narrating nonsense, characters that shift and glitch. She watches. Then the next one appears, and the next. By evening, she's consumed 47 videos—none written by a human, none created with her development in mind. Somewhere, a creator she'll never know made $0.003. Somewhere else, YouTube took a cut.
This isn't a hypothetical. It's happening at scale right now, and the numbers are staggering enough to make you want to throw your phone out the window.
The Scale of the Problem: 63 Billion Views and Growing
When researchers at Kapwing analyzed 15,000 trending channels on YouTube in October 2025, they found something alarming: 278 channels producing nothing but AI slop had collectively amassed 63 billion views, 221 million subscribers, and an estimated $117 million in annual ad revenue (Kapwing, 2025). That's not a niche problem. By Kapwing's estimate, roughly 21% of YouTube's feed now consists of low-quality, AI-generated videos (Kapwing, 2025).
The New York Times investigation in February 2026 went deeper. Reporters reviewed over 1,000 Shorts recommended to young children and found that more than 40 percent of those served after children's viewing sessions appeared to contain AI-generated visuals (The New York Times, 2026). Forty percent. That's not an edge case anymore—that's the baseline.
To put this in perspective: Sesame Street has published about 3,900 videos to YouTube in its entire 20 years on the platform, compared to Jo Jo Funland's 10,000 videos in seven months (The 74 Million, 2026). One channel. Seven months. More content than a cultural institution has produced in two decades.
Inside Jo Jo Funland: The Mechanical Speed of AI Production
YouTube recommends Jo Jo Funland to toddlers constantly. It's a blank canvas of a channel name, the kind of thing that tells you nothing and sounds vaguely friendly. The content? Algorithmic garbage designed to keep tiny eyes locked on screens.
Here's the mechanical part: Jo Jo Funland posted more than 10,000 videos in seven months, averaging about 50 new videos each day (The 74 Million, 2026). Fifty. Every single day. From August 2025 to March 2026. No human creator working in children's media could maintain that pace. No human would want to. The speed itself is the tell—this is automated production at industrial scale.
When you upload 50 videos daily, you're not crafting narratives. You're not thinking about child development. You're optimizing for one thing: the algorithm. Bright colors, jarring transitions, repetitive music, characters that move in unpredictable ways. Fairplay, a child advocacy organization, documented that creators are openly advertising profits from 'plotless, mesmerizing AI content' (Fairplay, 2026). Mesmerizing. Not educational. Not enriching. Mesmerizing. That's the honest pitch.
Follow the Money: Who Profits While Kids' Attention Gets Hijacked
The economics are brutal and straightforward. The top AI slop channels targeting children have each earned over $4.25 million in annual revenue (Fairplay, 2026). Across the 278 documented channels, the ecosystem pulls in an estimated $117 million a year (Kapwing, 2025). In other words, creating garbage is more profitable than creating quality.
YouTube's ad-revenue-sharing model rewards volume. More videos = more impressions = more ad slots = more money. The platform takes its cut, the creator takes their cut, and the only entity not benefiting is the child watching. There's no incentive for YouTube to slow this down, and the company knows exactly what's happening: YouTube CEO Neal Mohan's January 2026 annual letter acknowledged that one in five Shorts the platform recommends to new users is low-quality, mass-produced AI-generated video (YouTube, 2026). They know. They're acknowledging it. And the problem keeps growing.
The math is simple: a creator spends $50 on computing resources and generates 50 AI videos. YouTube recommends them to 2 million kids. Advertisers pay YouTube for impressions. YouTube pays the creator a fraction. Everyone except the kids gets paid. And since the algorithm treats "watch time" as the ultimate metric—not "learning" or "development"—the mesmerizing slop dominates.
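That back-of-the-envelope math can be sketched in a few lines. The $50 compute spend and 50-videos-a-day rate come from the article; the per-video views and creator RPM below are illustrative assumptions, not reported data:

```python
# Back-of-envelope sketch of the incentive math described above.
# COMPUTE_COST and VIDEOS_PER_DAY come from the article; VIEWS_PER_VIDEO
# and RPM are invented for illustration only.

COMPUTE_COST = 50.00      # daily spend on AI generation ($)
VIDEOS_PER_DAY = 50       # posting rate documented for channels like Jo Jo Funland
VIEWS_PER_VIDEO = 40_000  # assumed average views per video
RPM = 1.00                # assumed creator payout per 1,000 views ($)

daily_views = VIDEOS_PER_DAY * VIEWS_PER_VIDEO
creator_revenue = daily_views / 1_000 * RPM
profit = creator_revenue - COMPUTE_COST

print(f"Views generated per day: {daily_views:,}")       # 2,000,000
print(f"Creator payout per day:  ${creator_revenue:,.2f}")  # $2,000.00
print(f"Profit per day:          ${profit:,.2f}")           # $1,950.00
```

Under even these modest assumptions, a $50 outlay returns forty times its cost in a day, and nothing in the payout formula rewards quality.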
The Invisible Algorithm: How YouTube Makes AI Slop Unavoidable
Parents aren't stupid. They see this garbage and wonder: why is my kid watching this? The answer is that YouTube is 'consistently recommending AI content to young users in ways that make it kind of impossible for them to avoid' (Fairplay, 2026). Not hard. Impossible.
The algorithm works backward from engagement. It sees that AI slop keeps kids watching—partly because it's hypnotic, partly because there's no narrative satisfaction (so kids keep swiping for the next hit). The algorithm learns this. It then serves more slop. Parents set up YouTube Kids thinking they're in a safe sandbox, and the sandbox is actively populated with content designed to hijack attention.
This isn't a distribution problem. This is a recommendation problem. YouTube could label AI content, deprioritize it, or remove it entirely. Instead, YouTube's own CEO acknowledges that the platform consistently recommends this material (YouTube, 2026). The consistency is the point. The algorithm has been trained to treat engagement metrics as truth. Engagement is truth. Quality, development, safety—these are abstract concepts that don't feed the model.
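The feedback loop described above can be reduced to a toy model. This is not YouTube's actual ranking system; it is a minimal sketch, with invented numbers, of what happens when a recommender ranks on accumulated watch time alone:

```python
# Toy model of an engagement-only recommender. All figures are invented;
# the only point is that ranking purely on watch time compounds any
# initial attention edge into a monoculture.
from collections import defaultdict

# Assume slop holds attention slightly longer per view than human-made content.
AVG_WATCH_SECONDS = {"ai_slop": 35, "human_made": 25}

watch_time = defaultdict(float)

def recommend():
    # Rank solely by accumulated watch time; no quality or safety term exists.
    return max(AVG_WATCH_SECONDS, key=lambda content: watch_time[content])

# Seed with one view of each content type, then let the loop run.
for content in AVG_WATCH_SECONDS:
    watch_time[content] += AVG_WATCH_SECONDS[content]

history = []
for _ in range(10):
    pick = recommend()
    watch_time[pick] += AVG_WATCH_SECONDS[pick]  # engagement reinforces the pick
    history.append(pick)

print(history)  # a small initial edge wins every subsequent slot
```

Because the objective never sees quality, a 10-second attention advantage is enough for slop to capture every recommendation slot after the first round.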
What This Actually Does to Developing Brains
This is where the story stops being about tech and starts being about harm. Kathy Hirsh-Pasek, a professor of psychology and neuroscience at Temple University and a senior fellow at the Brookings Institution, warns: 'We're at the beginning of a monster problem' (Undark, 2026).
Dana Suskind, a professor of surgery and pediatrics at the University of Chicago, is more direct: 'This is not neutral content...I think of this as toddler AI misinformation at an industrial scale. It's very risky for the developing brain' (The 74 Million, 2026). Here's why: when children's brains are still in early development, 'Every experience is building a million new neural connections' and 'You will be unintentionally wiring the brain in incorrect ways' (The 74 Million, 2026).
The concrete harms are documented. One AI-generated video showed a crawling baby swallowing whole grapes (a major choking hazard) and eating honey (which carries the potentially fatal risk of infant botulism) (The 74 Million, 2026). Another, titled the 'Vroom Vroom! Car Ride Song,' teaches incorrect information ('Red means stop, and green means right') and sends the wrong safety message about riding in cars (The 74 Million, 2026). These aren't edge cases—they're symptoms of a system that produces thousands of videos daily without quality checks.
Rachel Barr, a developmental psychologist and director of the Georgetown University Early Learning Project, explains that children learn best from media that has a clear narrative and characters that relate to real life (The New York Times, 2026). AI slop has neither. The characters glitch. The narratives collapse. The worlds don't follow rules. And yet children watch, over and over, their brains rewiring to make sense of something that doesn't make sense. That's the damage.
The Trust Crisis: Why This Matters Beyond Kids
You're 18-30 years old. You've grown up skeptical of platforms. You know they optimize for engagement over truth. But here's what should worry you: if kids are being trained right now to accept AI-generated content as normal, to blur the line between real and fake, that affects the entire information ecosystem you're inheriting.
The data already shows the cracks. Raptive surveyed 3,000 U.S. adults and found that consumer trust drops approximately 50% when content is perceived as AI-generated (Raptive, 2025). Fifty percent. That's not a small dent in credibility—that's a crater.
When toddlers watch AI slop uncritically, they're not developing the skepticism they'll need as teens and adults. And that's part of a larger shift in the attention economy, where platforms are increasingly willing to trade user agency for engagement metrics. By the time these kids can think critically about media, they'll have been trained to accept the fake as inevitable.
The counterpoint is worth stating: YouTube has made measurable improvements, including enhanced parental dashboards, granular content blocking by channel and topic, and faster reporting mechanisms (YouTube, 2026). But improvements aren't enough when the core model incentivizes the problem.
Why Regulation Moves Slowly (And Who Benefits From That)
You might expect this to be fixed by now. Congress has held hearings. More than 135 organizations signed an open letter to YouTube CEO Neal Mohan, including the American Federation of Teachers and the American Counseling Association (Fairplay, 2026). Advocacy groups are screaming. Parents are frustrated.
Nothing has fundamentally changed because the incentives don't align. YouTube makes billions from ad impressions. Creators make millions from volume. Advertisers don't care if the content is AI-generated as long as it drives eyeballs. Regulation would cut into all three revenue streams, so regulation moves slowly. The gap between what platforms promise (safety, quality, protection) and what they actually allow (industrial-scale AI slop) keeps widening.
The counterpoint to regulation is real: more than half of new creators entering YouTube in 2025 are using some form of AI video tool (YouTube, 2025), which suggests the technology lowers the barrier to making content at all. But broad access doesn't excuse mass production of harmful content targeting toddlers.
What Happens Next
You can't re-engineer YouTube for your younger siblings or future kids; only YouTube, and the regulators overseeing it, can do that. But here's what matters: every view, every watch-time metric, every ad click teaches the algorithm that mesmerizing slop works. The system will change only when the economics break.
For now, 63 billion views say the opposite. They say the system is working perfectly—just not for the kids.
Holly Chambers