YouTube blasted by hundreds of experts over ‘AI slop’ videos served up to kids
Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children.
In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube’s parent company Google, children’s advocacy group Fairplay expresses “serious concern” about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organizations and individual experts such as child psychiatrists and educators.
“This ‘AI slop’ harms children’s development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development,” the letter reads. “These harms are particularly acute for young children.” The letter calls on YouTube to clearly label all AI-generated content and to ban AI-generated content from YouTube Kids entirely. The signatories also propose barring AI-generated videos from being recommended to users under 18 and giving parents an option to block AI-generated content even if their child searches for it.
The letter is signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts like “The Anxious Generation” author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition.
Much of this AI-generated content is fast-paced, with bright colors, lively music and clickbait titles designed to grab the attention of young viewers, the letter outlines. There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of “brainrot.”
YouTube spokesperson Boot Bullwinkle said in a statement that the company has “high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels.”
“We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content,” Bullwinkle said. “We’re always evolving our approach to stay current as the ecosystem evolves.”
YouTube’s current policy regarding AI-generated content requires creators to disclose when content that’s “realistic” is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects.
YouTube said it is actively working on developing labels for YouTube Kids.
In its letter, Fairplay argues that YouTube’s voluntary disclosure policy and what it sees as an “extremely limited” definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labeled as such. The group also argues that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children “to fend for themselves or their parents to play whack-a-mole,” the letter reads.
Fairplay’s campaign comes shortly after Google’s AI Futures Fund invested $1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg.
The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case.
“Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children’s time online — including babies. AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction,” said Rachel Franz, the director of Fairplay’s Young Children Thrive Offline program, in a statement. “What’s more, YouTube’s algorithm makes it impossible for kids to avoid AI slop.”
Earlier this year, Mohan listed “managing AI slop” as one of the company’s priorities for 2026. In a January blog post, he wrote that the company was “actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content.”