The AI Doc's director was "scared shitless" by AI, so he made a movie about it
If you're feeling anxious about AI and what it means for the future of humanity, you should watch The AI Doc: Or, How I Became an Apocaloptimist. As I noted in my review, the film aims to deliver some clarity amid all the hype. Now that it's in theaters, we sat down with director Daniel Roher, who won an Oscar for his film Navalny, to dive deeper into his complicated feelings around AI.
The entire topic made him nervous, Roher said, so he decided to team up with similarly anxious colleagues to demystify AI using film. He describes the goal of the project to be a sort of "first date" with AI, a way to hear about its potential benefits from AI boosters, while also taking in the many negatives brought up by critics. It’s probably too late to stop AI entirely, but he thinks we can at least try to find ways to limit the worst impulses of the tech industry.
"I wanted to make this movie because I was scared shitless, that's the crux of it," he said in an interview on the Engadget Podcast. "I didn't understand what AI was. I didn't understand why everyone was talking about it and why it seemed to be this thing that came outta the woodwork and all of a sudden, people were talking about it like it was the apocalypse or like it was gonna be the most optimistic, greatest thing ever."
Ultimately, Roher arrived at the term “apocaloptimist,” which balances the contradictory ideas that AI can both seriously harm society, and that we can still shape the future by criticizing or outright rejecting it. "It's a worldview. It's choosing not to buy into a binary that's asking us to see this as either apocalypse and the end of the world, or through the rose-colored glasses of unvarnished optimism, which is also sort of a fallacy," he said.
On the one hand, he's well aware the major players pushing AI are, at best, flawed. When I mentioned Marc Andreessen's recent comments about proudly having no inner thoughts, Roher added, "They're just fucking weird. They're just nerds who became billionaires because they were born at the right time and they had the right interests. They're brilliant in their own way and they have abilities, but they don't understand what it is to exist. They don't know what real human beings navigate and go through. They have a very narrow worldview that's callous and cold and calculated."
For many, the overnight ubiquity of this largely untested technology and the collective wealth and power of those supporting it means rampant negative externalities are all but guaranteed. But Roher's apocaloptimism (we'll see if the term quite catches on) chafes against cynicism and doomsaying. He points to OpenAI’s Sora video generation app, which was heavily criticized as a tool that could lead to more realistic deepfakes, but was unceremoniously killed last week.
"I think people were [made] uncomfortable by it, and good,” Roher said. “And, shame on OpenAI for releasing this thing without any thoughtfulness. I guess the low bar of like, at least they had the decency to pull back and retract it, but only after public condemnation." He added, "to the cynical people saying we're all fucked, I'm like, no fuck you, we're not. Collective action matters.”
And notably, his entire goal is to think more deeply about the uses of this technology than the people actually creating it do. "These guys, when you actually sit down with them, they don't have clarity, they can't make you feel better. They don't know themselves. They're just motivated by the unbridled optimism of the greatest profit-making technology in the history of humanity."