OpenAI Has Some Catching Up to Do
by Dan Shipper, in Chain of Thought
[Cover image: Midjourney / Every Illustration.]
Was this newsletter forwarded to you? Sign up to get it in your inbox.
This morning I hit my usage limit on Codex, OpenAI’s competitor to Claude Code. I’m building an agent-native Markdown editor for the Every team. It’s exactly the kind of complex, detail-heavy project where Codex shines.
But this week was an exception. Most of my coding happens in Claude Code now—and I’m not alone.
On Tuesday night, we had about 20 founders over to the office for a dinner on the future of AI. I asked everyone what their daily driver AI tools were. Of the programmers, almost everyone said Claude Code with Opus 4.5. The lone holdout was Naveen Naidu—general manager of Monologue—who still prefers Codex.
A month ago, the room would have been split between Codex CLI, GPT 5.1 in Cursor, and Claude Code—with some Droid sprinkled in.
A year ago, the whole room would have been using GPT models.
This might not surprise you if you’ve been on X lately. It seems the only thing on everyone’s mind is Claude Code. This audience is obviously a narrow slice of the market, but it’s the same slice that was excited about ChatGPT when it first came out.
So, what explains Claude Code and Opus's sudden rise in startup circles? It's not better marketing. Sure, Anthropic has its "thinking" caps. But compared to the high-profile livestreams we've gotten used to for important model releases, it barely promoted Opus 4.5 at launch. Instead, it's who Anthropic decided to build for, and how that choice is shaping the direction of the whole tech industry.
How Claude Code happened
When Anthropic first released Claude Code alongside Sonnet 3.7 in late February 2025, it was a bold bet. At a time when existing code editors were firmly stuck cramming AI agents into a sidebar, Anthropic went terminal-first and bypassed the code editor altogether. It signaled, "We're moving to a world where code doesn't matter." At the time, we wrote that while it was incredible at vibe coding new projects from scratch, it wasn't yet good enough to work with large codebases on its own. Still, we were impressed.
OpenAI responded two months later. It launched Codex CLI in April and, in May, Codex Web, a cloud-based agent that ran in ChatGPT. Both products did away with the code editor, but neither worked quite as well as Claude Code: Codex CLI didn't have access to OpenAI's most powerful model, and Codex Web ran in a virtual machine, a sandboxed emulation of a computer rather than your actual machine. Still, OpenAI seemed to share Anthropic's vision of coding and was closing the gap.