The Guardian view on the BBC’s future: who decides what news means? | Editorial
AI is interpreting journalism without regard for truth. The BBC must build the capacity to ensure its reporting is understood on its own terms
Appointing Matt Brittin, a former Google executive, as BBC director general is smarter than critics admit. Although he sat on the board of the Guardian’s publisher, Mr Brittin is no journalist. But he does understand platforms, scale and digital audiences.
Director generals come under scrutiny when crises hit, such as this week’s sacking of Scott Mills over his “personal conduct”. It then emerged that police had previously questioned the Radio 2 DJ over separate allegations of serious sexual offences, closing the case for lack of evidence. But the role’s underlying challenge is to meet future threats to the corporation’s audience.
On one measure, YouTube reaches more Britons than the BBC’s channels combined. But looming into view is AI, which has facilitated misinformation, error and ignorance. It is already beginning to mediate the news – and how it is understood. Ofcom says about 30% of searches display AI summaries, seen regularly by more than half of adults. The BBC has tried, for good reasons, to stop its journalism being extracted by AI without payment. But it risks excluding itself from a technology through which many now get information. The Reuters Institute found only about 6% of users turn to AI for news. But as summaries embed themselves in search, journalism becomes raw material, not the finished product.
A 2025 paper by Kai-Cheng Yang of Binghamton University reveals the implications. It shows that AI-generated answers draw on a narrow band of sources: OpenAI models rest on wire services; Google’s on search-driven global media; Perplexity on respected brands such as the BBC. The same question produces a different response depending on the system used. Despite the BBC being the UK’s most trusted news source, only two of four AI tools drew on its content, according to a study by the IPPR thinktank. The UK’s most popular AI tool – OpenAI’s ChatGPT – cited GB News more often. ChatGPT’s top citations often align with OpenAI’s publisher deals (including the Guardian’s). The lack of transparency around how AI’s sources are selected and weighted is problematic.
Audiences once chose between narratives. Social media made them navigate – or trapped them in filter bubbles. Now AI distils a single response. Nuance and plurality are at risk. Journalists have traditionally judged what information to use and which sources to prioritise. Their mental models were built up through reporting. AI systems perform those functions through hidden algorithms, privileging what is most common, not what is most true.
Control lies not just in owning information, but in how it is structured, modelled and understood. The IPPR rightly argues that the UK must combine transparency over how AI answers are generated, fair licensing frameworks to ensure publishers are paid and intervention to curb platform dominance over information. Public service media – especially the BBC – should anchor this strategy. Impartial, accurate news is essential for democratic stability.
The BBC’s charter review must secure funding and end the cycle of “existential” resets with a permanent settlement protecting its independence. The BBC has the scale, data and mandate to underpin a trustworthy “orchestration” layer for news. Its journalism must be machine-readable, queryable and interpretable on its own terms. Letting companies like Palantir, co-founded by the Trump-backing billionaire Peter Thiel, do this would be a mistake. The BBC has traditionally fused innovation with public purpose. It must do so again – and ensure news remains contestable, transparent and accountable.
https://www.theguardian.com/commentisfree/2026/apr/01/the-guardian-view-on-the-bbcs-future-who-decides-what-news-means