John Arnold - China, Energy Markets and Fixing America's Systems - [Invest Like the Best, EP.461]
My guest today is John Arnold. John is probably the most famous energy trader of all time and certainly the most successful. One of the things John talks about is cultivating the best seat in your industry – the seat with the best perspective, the most information, the best systems. John has been closely watching China's convergence in robotics, AI, and EVs, and shares his perspective from his recent trip to the country. We talk about the state of energy markets today – the misaligned goals and incentives, the NIMBYism that prevents building in America, and what he actually thinks about the wave of nuclear energy startups that everyone seems excited about. John is also one of the most innovative philanthropists working today, applying that same analytical rigor to diagnosing structural failures.
Introduction
Patrick: My guest today is John Arnold. John is probably the most famous energy trader of all time and certainly the most successful. One of the things John says is that he wanted to cultivate and build the best seat in his industry – the seat with the best perspective, with the most information, with the best systems.
What's most interesting about John is that after being the most successful energy trader of all time, you could argue that he's gone on to be the most innovative philanthropist as well. John has applied this same rigor to philanthropy across many different sectors, and what's so exciting about this conversation is that it feels like you're talking to a talented entrepreneur or a talented operator in all of these different fields, who's willing to share exactly what he and his team have learned about what makes certain problems manifest across our country.

![John Arnold - China, Energy Markets and Fixing America's Systems - [Invest Like the Best, EP.461]](https://colossus.com/wp-content/uploads/2026/03/John-Arnold-ramp-scaled.jpg)
