OpenAI decides the best way to fight critical AI coverage is to own a newsroom
OpenAI has acquired tech talk show TBPN. The show will supposedly remain editorially independent but report to OpenAI's communications department. That's as contradictory as it sounds. So what's OpenAI really after?
OpenAI has acquired the online talk show TBPN, which has been covering tech news and interviewing industry leaders since October 2024.
The purchase price was not disclosed. TBPN broadcasts daily and focuses on interviews with big names in tech. Episodes typically run between 20 and 60 minutes and pull low four- to five-digit view counts on YouTube.
According to the Wall Street Journal, the show averages around 70,000 viewers per episode "across various online platforms" and brought in roughly 5 million dollars in ad revenue in 2025. The eleven-person team expects revenue to top 30 million dollars in 2026.
TBPN will be "editorially independent," but report to OpenAI's comms chief
According to OpenAI's Head of Applications Fidji Simo, TBPN will remain editorially independent while also helping with marketing and communications. The show's ad business will be shut down. TBPN will report to OpenAI's head of communications, Chris Lehane, but keep control over programming, guests, and content.
The promises contradict each other. A media outlet that reports to a company's comms department isn't independent, no matter how much editorial freedom it's initially granted. OpenAI pays the salaries, sets the structure, and can replace staff at any time. You can't promise integration into corporate comms and free reporting at the same time.
Simo justifies the acquisition by saying the usual communications playbook doesn't fit OpenAI and that they want to foster a "real, constructive conversation" about AI. But OpenAI could do that as a sponsor, or even with similar formats of its own. Instead, they're buying the whole thing outright.
OpenAI—a company that draws more public criticism than arguably any other AI player—is probably the worst possible owner for a media outlet that claims to operate independently. "I don't expect them to go any easier on us, am sure I'll do my part to help enable that with occasional stupid decisions," writes OpenAI CEO Sam Altman.
It seems self-deprecating, but it doesn't answer the obvious question: if OpenAI doesn't want to influence coverage, why buy the show? It's not about the money; the ad revenue is being cut, and it wouldn't even be pocket change for OpenAI anyway.
The real play is shaping public opinion on AI
The motivation behind the purchase is probably less about OpenAI specifically and more about AI as a whole. Media outlets cover the topic from all angles, opportunities and risks alike, and often lean harder on the risks. Surveys show that while AI usage is high, public trust is low. According to the WSJ, Silicon Valley puts at least part of the blame on traditional media.
Simo's own framing backs this up. She ties her call for "a real, constructive conversation about the changes AI creates" directly to OpenAI's mission to "ensure artificial general intelligence benefits all of humanity." That says two things: this conversation isn't happening elsewhere—at least not the way OpenAI wants—and OpenAI feels it needs to step in.
Read that way, OpenAI bought a group of influencers who deliver coverage in the strangely bland style of classic US news channels to steer the public narrative on AI in its favor. Lobbying dressed up as journalism—another safe space like Joe Rogan's or Lex Fridman's podcasts, where tech giants get to share their theories on the future of humanity without any real pushback, theories that often serve their own bottom line.
Or maybe it's simpler than that: TBPN host John Coogan writes on X that he's worked with Altman for over a decade. Altman funded his first company back in 2013. Perhaps the whole deal is just a favor between old friends.