
Convergent Representations of Linguistic Constructions in Human and Artificial Neural Systems

arXiv q-bio.NC · Pegah Ramezani, Thomas Kinfe, Andreas Maier, Achim Schilling, Patrick Krauss · April 1, 2026

arXiv:2603.29617v1 Announce Type: new


Abstract: Understanding how the brain processes linguistic constructions is a central challenge in cognitive neuroscience and linguistics. Recent computational studies show that artificial neural language models spontaneously develop differentiated representations of Argument Structure Constructions (ASCs), generating predictions about when and how construction-level information emerges during processing. The present study tests these predictions in human neural activity using electroencephalography (EEG). Ten native English speakers listened to 200 synthetically generated sentences across four construction types (transitive, ditransitive, caused-motion, resultative) while neural responses were recorded. Analyses using time-frequency methods, feature extraction, and machine learning classification revealed construction-specific neural signatures emerging primarily at sentence-final positions, where argument structure becomes fully disambiguated, and most prominently in the alpha band. Pairwise classification showed reliable differentiation, especially between ditransitive and resultative constructions, while other pairs overlapped. Crucially, the temporal emergence and similarity structure of these effects mirror patterns in recurrent and transformer-based language models, where constructional representations arise during integrative processing stages. These findings support the view that linguistic constructions are neurally encoded as distinct form-meaning mappings, in line with Construction Grammar, and suggest convergence between biological and artificial systems on similar representational solutions. More broadly, this convergence is consistent with the idea that learning systems discover stable regions within an underlying representational landscape - recently termed a Platonic representational space - that constrains the emergence of efficient linguistic abstractions.
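The abstract's pipeline (time-frequency feature extraction in the alpha band, then pairwise classification of construction types) can be illustrated with a minimal sketch. This is not the authors' code: the sampling rate, band edges, epoch shapes, and the nearest-centroid classifier are all assumptions chosen to keep the example self-contained with NumPy only; the paper's actual feature extraction and classifiers may differ.

```python
import numpy as np

def alpha_band_power(epochs, fs=250.0, band=(8.0, 13.0)):
    """Mean spectral power in the alpha band for each epoch and channel.

    epochs: array of shape (n_epochs, n_channels, n_samples).
    Returns features of shape (n_epochs, n_channels).
    """
    n_samples = epochs.shape[-1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # Power spectrum per epoch/channel via the real FFT.
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[..., mask].mean(axis=-1)

def nearest_centroid_pairwise(X_train, y_train, X_test):
    """Binary nearest-centroid classifier for one construction pair."""
    labels = np.unique(y_train)
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in labels}
    # Distance of each test point to each class centroid.
    dists = np.stack(
        [np.linalg.norm(X_test - centroids[c], axis=1) for c in labels]
    )
    return labels[np.argmin(dists, axis=0)]
```

As a sanity check, two synthetic "construction" classes that differ only in the strength of a 10 Hz oscillation are separated well above chance by this feature/classifier pair, mirroring the logic (not the specifics) of the pairwise analyses described above.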

Subjects:

Neurons and Cognition (q-bio.NC); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)

Cite as: arXiv:2603.29617 [q-bio.NC]

(or arXiv:2603.29617v1 [q-bio.NC] for this version)

https://doi.org/10.48550/arXiv.2603.29617

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Patrick Krauss [v1] Tue, 31 Mar 2026 11:37:50 UTC (1,123 KB)
