GPT reasoning models have "line of sight" to AGI, says OpenAI's Greg Brockman
OpenAI co-founder Greg Brockman says the debate about whether large language models can achieve general intelligence (AGI) is settled. The GPT architecture will lead to AGI.
"I think that we have definitively answered that question—it is going to go to AGI. Like we see line of sight," OpenAI President Greg Brockman said of the GPT reasoning models on the Big Technology Podcast.
It's a bold claim. Brockman is essentially declaring one of the central open questions in AI research settled: Can models primarily trained on text develop a real understanding of the world? Or does that require multimodal world models like Sora? Among AI researchers, the technical approach remains hotly debated.
OpenAI recently shut down the Sora app and model. World model research will continue for robotics, but on a smaller scale and without consumer-facing products.
Brockman calls Sora an "incredible model," but says it sits on "a different branch of the tech tree" than the GPT reasoning series. With limited computing power, pursuing both at the same time isn't feasible for OpenAI. For Brockman, it's less about the relative importance of the two approaches and more about "sequencing and timing." The applications "we've always dreamed of are starting to come into reach," and the way to get there is through the GPT architecture.
When host Alex Kantrowitz asks whether OpenAI could be missing something crucial by skipping Sora-style world models, pointing out that DeepMind's Demis Hassabis had said Google's "Nano Banana" image model felt particularly close to AGI, Brockman acknowledges the risk: "In this field you do have to make choices. Right? You have to make a bet."
Researchers remain divided on whether LLMs can reach general intelligence
Whether purely text-based models can achieve general intelligence is far from settled in the broader AI research community. Renowned AI researcher Yann LeCun has argued for years that LLMs won't lead to human-like intelligence. In his view, LLMs have a very limited understanding of logic, don't understand the physical world, have no permanent memory, cannot think rationally, and cannot plan hierarchically. Instead, he's betting on so-called world models to develop a comprehensive understanding of the environment. DeepMind founder Demis Hassabis holds a similar view: LLM scaling alone isn't enough, and further breakthroughs are needed.
AI researcher François Chollet defines intelligence as the ability to learn new skills efficiently. What matters is how well a system can independently form abstractions. While current language models can be placed on an intelligence scale, they rank very low: outside their training domain, they have to relearn everything from scratch. Continuous learning could help close this gap.
This view aligns with a broader line of research. In a recent paper, DeepMind researcher Richard Sutton and former DeepMind researcher David Silver called for a paradigm shift: instead of training on human knowledge, systems should learn from their own experience. Silver has since founded his own startup focused on simulation learning.
Ex-OpenAI researcher Jerry Tworek, one of the key minds behind OpenAI's reasoning model breakthroughs, also describes his research field of deep learning as "done." The next step, he says, is building simulations of human work where AI systems can learn skills. His new startup, Core Automation, is dedicated to this approach.
Not everyone shares this skepticism, though. DeepMind researcher Adam Brown recently defended the potential of the current LLM architecture. He compares the token prediction mechanism to biological evolution: a simple rule that, through massive scaling, creates emergent complexity that people perceive as understanding. Brown argues this complexity could even lead to consciousness.
The Decoder
https://the-decoder.com/gpt-reasoning-models-have-line-of-sight-to-agi-says-openais-greg-brockman/