Metriplector: From Field Theory to Neural Architecture
arXiv:2603.29496v1 Announce Type: new
Abstract: We present Metriplector, a neural architecture primitive in which the input configures an abstract physical system--fields, sources, and operators--and the dynamics of that system is the computation. Multiple fields evolve via coupled metriplectic dynamics, and the stress-energy tensor T^{μν}, derived from Noether's theorem, provides the readout. The metriplectic formulation admits a natural spectrum of instantiations: the dissipative branch alone yields a screened Poisson equation solved exactly via conjugate gradient; activating the full structure--including the antisymmetric Poisson bracket--gives field dynamics for image recognition and language modeling. We evaluate Metriplector across four domains, each using a task-specific architecture built from this shared primitive with progressively richer physics: F1=1.0 on maze pathfinding, generalizing from 15x15 training grids to unseen 39x39 grids; 97.2% exact Sudoku solve rate with zero structural injection; 81.03% on CIFAR-100 with 2.26M parameters; and 1.182 bits/byte on language modeling with 3.6x fewer training tokens than a GPT baseline.
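The abstract's dissipative branch reduces to a screened Poisson equation, (-∇² + m²)φ = ρ, whose discrete operator is symmetric positive definite and therefore solvable exactly by conjugate gradient. The paper's actual operators and discretization are not given here; the following is a minimal matrix-free sketch of that idea on a 2D grid with a standard 5-point Laplacian and zero (Dirichlet) boundaries. The function name, grid setup, and parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def screened_poisson_cg(rho, m2=1.0, tol=1e-8, max_iter=500):
    """Solve (-Laplacian + m^2) phi = rho on a 2D grid via conjugate gradient.

    Illustrative sketch only: 5-point Laplacian, zero Dirichlet boundaries.
    The operator A = -Laplacian + m^2 I is symmetric positive definite for
    m^2 > 0, so CG converges to the exact discrete solution.
    """
    def apply_A(phi):
        # Matrix-free 5-point Laplacian with zero-padding outside the grid.
        lap = -4.0 * phi
        lap[1:, :] += phi[:-1, :]
        lap[:-1, :] += phi[1:, :]
        lap[:, 1:] += phi[:, :-1]
        lap[:, :-1] += phi[:, 1:]
        return -lap + m2 * phi

    phi = np.zeros_like(rho, dtype=float)
    r = rho - apply_A(phi)          # initial residual (phi = 0, so r = rho)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / np.vdot(p, Ap)
        phi += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:   # residual norm below tolerance
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return phi
```

Because CG runs to convergence rather than a fixed number of layers, the "depth" of this branch adapts to the conditioning of the configured system, which is presumably what the abstract means by solving the equation "exactly".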
Comments: 30 pages, 7 figures
Subjects:
Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
MSC classes: 68T07, 37K05, 70H33
ACM classes: I.2.6; F.2.2
Cite as: arXiv:2603.29496 [cs.AI]
(or arXiv:2603.29496v1 [cs.AI] for this version)
https://doi.org/10.48550/arXiv.2603.29496
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Peter Toth [view email] [v1] Tue, 31 Mar 2026 09:40:26 UTC (210 KB)