AI NEWS HUB by Eigenvector

GPT reasoning models have "line of sight" to AGI, says OpenAI's Greg Brockman

The Decoder · by Matthias Bastian · April 2, 2026 · 4 min read


OpenAI co-founder Greg Brockman says the debate about whether large language models can achieve general intelligence (AGI) is settled. The GPT architecture will lead to AGI.

"I think that we have definitively answered that question—it is going to go to AGI. Like we see line of sight," OpenAI President Greg Brockman said of the GPT reasoning models on the Big Technology Podcast.

It's a bold claim. Brockman is essentially declaring one of the central open questions in AI research settled: Can models primarily trained on text develop a real understanding of the world? Or does that require multimodal world models like Sora? Among AI researchers, the technical approach remains hotly debated.

OpenAI recently shut down the Sora app and model. World model research will continue for robotics, but on a smaller scale and without consumer-facing products.

Brockman calls Sora an "incredible model," but says it sits on "a different branch of the tech tree" than the GPT reasoning series. With limited computing power, pursuing both at the same time isn't feasible for OpenAI. For Brockman, it's less about the relative importance of the two approaches and more about "sequencing and timing." The applications "we've always dreamed of are starting to come into reach," and the way to get there is through the GPT architecture.

When host Alex Kantrowitz asks whether OpenAI could be missing something crucial by skipping Sora-style world models, pointing out that DeepMind's Demis Hassabis had said Google's "Nano Banana" image model felt particularly close to AGI, Brockman acknowledges the risk: "In this field you do have to make choices. Right? You have to make a bet."

Researchers remain divided on whether LLMs can reach general intelligence

Whether purely text-based models can achieve general intelligence is far from settled in the broader AI research community. Renowned AI researcher Yann LeCun has argued for years that LLMs won't lead to human-like intelligence. In his view, LLMs have a very limited understanding of logic, don't understand the physical world, have no permanent memory, cannot think rationally, and cannot plan hierarchically. Instead, he's betting on so-called world models to develop a comprehensive understanding of the environment. DeepMind co-founder Demis Hassabis holds a similar view: LLM scaling alone isn't enough, and further breakthroughs are needed.

AI researcher François Chollet defines intelligence as the ability to learn new skills efficiently. What matters is how well a system can independently form abstractions. While current language models can be placed on an intelligence scale, they rank very low: outside their training domain, they have to relearn everything from scratch. Continuous learning could help address this gap.

This view aligns with a broader line of research. In a recent paper, DeepMind researcher Richard Sutton and former DeepMind researcher David Silver called for a paradigm shift: instead of training on human knowledge, systems should learn from their own experience. Silver has since founded his own startup focused on simulation learning.

Ex-OpenAI researcher Jerry Tworek, one of the key minds behind OpenAI's reasoning model breakthroughs, also describes his research field of deep learning as "done." The next step, he says, is building simulations of human work where AI systems can learn skills. His new startup, Core Automation, is dedicated to this approach.

Not everyone shares this skepticism, though. DeepMind researcher Adam Brown recently defended the potential of the current LLM architecture. He compares the token prediction mechanism to biological evolution: a simple rule that, through massive scaling, creates emergent complexity that people perceive as understanding. Brown argues this complexity could even lead to consciousness.
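The "simple rule" Brown refers to is ordinary autoregressive next-token prediction: repeatedly sample the next token from a learned conditional distribution and append it to the context. A minimal sketch of that sampling loop, using a hypothetical hand-built bigram table in place of a real learned model, looks like this (real LLMs learn these distributions over vocabularies of ~100k tokens with billions of parameters, but the generation rule is the same):

```python
import random

# Toy stand-in for a learned model: P(next token | previous token).
# This tiny table is purely illustrative, not a real language model.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(token: str) -> str:
    """The 'simple rule': sample the next token from P(next | context)."""
    dist = BIGRAMS.get(token)
    if not dist:
        return "<eos>"  # no continuation known: end of sequence
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def generate(prompt: str, max_len: int = 16) -> list[str]:
    """Autoregressive loop: feed each sampled token back in as context."""
    out = prompt.split()
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(" ".join(generate("the")))  # e.g. "the cat sat down"
```

The point of the analogy is that nothing in this loop looks like "understanding"; any complexity emerges from the scale and quality of the learned distribution, not from the sampling rule itself.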
