
The human brain may work more like AI than anyone expected

ScienceDaily AI · January 21, 2026

Scientists have discovered that the human brain understands spoken language in a way that closely resembles how advanced AI language models work. By tracking brain activity as people listened to a long podcast, researchers found that meaning unfolds step by step—much like the layered processing inside systems such as GPT-style models.

A new study suggests that the human brain understands spoken language through a stepwise process that closely resembles how advanced AI language models operate. By recording brain activity from people listening to a spoken story, researchers found that later stages of the brain's response match deeper layers of AI systems, especially in well-known language regions such as Broca's area. The results call into question long-standing rule-based theories of language comprehension and are accompanied by a newly released public dataset that offers a powerful new way to study how meaning is formed in the brain.

The research, published in Nature Communications, was led by Dr. Ariel Goldstein of the Hebrew University with collaborators Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University. Together, the team uncovered an unexpected similarity between how humans make sense of speech and how modern AI models process text.

Using electrocorticography recordings from participants who listened to a thirty-minute podcast, the scientists tracked the timing and location of brain activity as language was processed. They found that the brain follows a structured sequence that closely matches the layered design of large language models such as GPT-2 and Llama 2.
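The comparison described here is typically done with an "encoding model": regress the neural signal on the language model's per-word embeddings and score how well held-out activity is predicted. The sketch below illustrates that analysis shape with synthetic data standing in for the real ECoG recordings and GPT-2 embeddings; it is not the authors' actual pipeline, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: per-word embeddings from one model layer
# (features) and per-word neural responses at a set of electrodes (targets).
n_words, n_dims, n_electrodes = 500, 64, 10
embeddings = rng.standard_normal((n_words, n_dims))
true_weights = rng.standard_normal((n_dims, n_electrodes))
neural = embeddings @ true_weights + 0.5 * rng.standard_normal((n_words, n_electrodes))

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Train on the first 400 words, evaluate on the last 100.
train, test = slice(0, 400), slice(400, 500)
w = ridge_fit(embeddings[train], neural[train])
pred = embeddings[test] @ w

def corr_per_column(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

scores = corr_per_column(pred, neural[test])
print(f"mean encoding correlation across electrodes: {scores.mean():.2f}")
```

In the real study, this fit is repeated per electrode, per model layer, and per time lag, which is what lets the authors ask which layer best explains which moment of the neural response.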

How the Brain Builds Meaning Over Time

As we listen to someone speak, the brain does not grasp meaning all at once. Instead, each word passes through a series of neural steps. Goldstein and his colleagues showed that these steps unfold over time in a way that mirrors how AI models handle language. Early layers in AI focus on basic word features, while deeper layers combine context, tone, and broader meaning.

Human brain activity followed the same pattern. Early neural signals matched the early stages of AI processing, while later brain responses lined up with the deeper layers of the models. This timing match was especially strong in higher level language areas such as Broca's area, where responses peaked later when linked to deeper AI layers.
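The layer-to-latency correspondence can be illustrated with a toy lag analysis: for each model layer's feature time course, find the lag at which it best correlates with the neural signal. The synthetic data below is constructed so that deeper layers are delayed more, reproducing the qualitative pattern the study reports; the numbers are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000                  # time points in the toy recording
n_layers = 8
lags = np.arange(0, 40)   # candidate lags, in samples

# Toy construction: the neural signal contains layer k's feature time course
# delayed by 4*k samples, so deeper layers should peak at later lags.
features = rng.standard_normal((n_layers, T))
neural = np.zeros(T)
for k in range(n_layers):
    delay = 4 * k
    neural[delay:] += features[k, :T - delay]
neural += 0.5 * rng.standard_normal(T)

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag)."""
    a, b = x[:T - lag], y[lag:]
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

peak_lag = [int(lags[np.argmax([lagged_corr(features[k], neural, l) for l in lags])])
            for k in range(n_layers)]
print("peak lag per layer:", peak_lag)  # should increase with layer depth
```

Seeing peak lags rise monotonically with layer index is the toy analogue of the finding that later brain responses align with deeper model layers.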

According to Dr. Goldstein, "What surprised us most was how closely the brain's temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding."

Why These Findings Matter

The study suggests that artificial intelligence can do more than generate text. It may also help scientists better understand how the human brain creates meaning. For many years, language was thought to rely mainly on fixed symbols and rigid hierarchies. These results challenge that view and instead point to a more flexible and statistical process in which meaning gradually emerges through context.

The researchers also tested traditional linguistic elements such as phonemes and morphemes. These classic features did not explain real-time brain activity as well as the contextual representations produced by AI models. This supports the idea that the brain relies more on flowing context than on strict linguistic building blocks.
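A model comparison of this kind amounts to fitting the same encoding model with two different feature sets and comparing held-out performance. The toy sketch below contrasts phoneme-like one-hot codes with dense contextual embeddings; the synthetic signal is deliberately generated from the contextual features, so the outcome only illustrates the analysis structure, not the paper's evidence.

```python
import numpy as np

rng = np.random.default_rng(2)
n_words = 400

# Two candidate feature sets for the same words:
#  - "classic": one-hot codes over a small symbol inventory (phoneme-like)
#  - "contextual": dense embeddings (LLM-like); here these generate the signal
classic = np.eye(40)[rng.integers(0, 40, n_words)]
contextual = rng.standard_normal((n_words, 64))
neural = contextual @ rng.standard_normal(64) + 0.5 * rng.standard_normal(n_words)

def ridge_r(X, y, alpha=1.0):
    """Fit ridge on the first 300 words, return test-set correlation."""
    Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    p = Xte @ w
    p, yte = p - p.mean(), yte - yte.mean()
    return (p @ yte) / (np.linalg.norm(p) * np.linalg.norm(yte))

r_classic = ridge_r(classic, neural)
r_contextual = ridge_r(contextual, neural)
print(f"classic features r = {r_classic:.2f}")
print(f"contextual features r = {r_contextual:.2f}")
```

In the actual study, both feature sets are extracted from the same podcast stimulus, and the contextual embeddings win on real recordings rather than by construction.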

A New Resource for Language Neuroscience

To help move the field forward, the team has made the complete set of neural recordings and language features publicly available. This open dataset allows researchers around the world to compare theories of language understanding and to develop computational models that more closely reflect how the human mind works.
