Exact Separation of Words via Trace Geometry
Abstract: A basic question in the theory of two-state measure-once quantum finite automata (MO-QFAs) is whether two distinct input words can be separated with certainty. In the setting considered here, this exact separation problem reduces to a trace-vanishing question in \(SU(2)\): given distinct positive words \(u\) and \(v\), find matrices \(A,B\in SU(2)\) such that the evaluated trace of \(u^{-1}v\) is zero. The central difficulty lies in the genuinely nonabelian regime where \(u\) and \(v\) have the same abelianization, so the obvious commutative information disappears and the fine structure of the word must be connected to the geometry of representations. This paper develops a slice-driven framework for that task and proves exact separation for every hard positive-word difference covered by four explicit certified conditions, thereby reducing the problem to a sharply delimited residual super-degenerate class. The method extracts algebraic data from the positive-word difference and uses them to select explicit low-dimensional families in \(SU(2)^2\) on which the trace becomes computable. On the algebraic side, the metabelian polynomial is decomposed into explicit interval blocks determined by prefix statistics, and a suitable slope specialization preserves nontrivial information. On the analytic side, the paper derives a computable quadratic trace identity on a visible one-parameter family and complements it with a Laurent-matrix sum-of-squares identity in a parallel algebraic model. These certified criteria are already strong in numerical experiments. This paper also shows that no method based only on finitely many finite-image tests can be universal.
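The trace-vanishing formulation is concrete enough to experiment with numerically. The sketch below (not the paper's slice-driven method; the parametrization, the chosen words, and the scanned one-parameter family are all illustrative assumptions) evaluates two positive words with the same abelianization as matrix products in \(SU(2)\) and scans for parameters where \(\operatorname{tr}(u^{-1}v)\) comes close to zero:

```python
import numpy as np

def su2(theta, phi=0.0, psi=0.0):
    # A general SU(2) element: [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1.
    a = np.cos(theta) * np.exp(1j * phi)
    b = np.sin(theta) * np.exp(1j * psi)
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def evaluate(word, A, B):
    # Evaluate a positive word over {'a', 'b'} as a matrix product in SU(2).
    M = np.eye(2, dtype=complex)
    for c in word:
        M = M @ (A if c == 'a' else B)
    return M

def separation_trace(u, v, A, B):
    # tr(eval(u)^{-1} eval(v)); exact separation asks for A, B making this 0.
    # For U in SU(2), the inverse is the conjugate transpose, and the trace
    # of any SU(2) element is real.
    U, V = evaluate(u, A, B), evaluate(v, A, B)
    return np.trace(np.conj(U.T) @ V)

# Words with the same abelianization (two a's, two b's each): the hard,
# genuinely nonabelian regime the abstract describes.
u, v = "abab", "baba"

# Scan a one-parameter family of choices for A with B held fixed -- a crude
# stand-in for the paper's certified slices, chosen here for illustration only.
B_fixed = su2(0.7, 0.3, 0.0)
thetas = np.linspace(0.0, np.pi, 2001)
traces = [separation_trace(u, v, su2(t), B_fixed).real for t in thetas]
best = min(range(len(thetas)), key=lambda i: abs(traces[i]))
print(f"min |trace| ~ {abs(traces[best]):.3e} at theta ~ {thetas[best]:.4f}")
```

On such a family the trace is a trigonometric polynomial in the scan parameter, so sign changes along the scan certify an exact zero between consecutive samples; the paper's contribution is to make such families and identities explicit and certified rather than found by search.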
Subjects:
Formal Languages and Automata Theory (cs.FL)
Cite as: arXiv:2603.29411 [cs.FL]
(or arXiv:2603.29411v1 [cs.FL] for this version)
https://doi.org/10.48550/arXiv.2603.29411
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Zeyu Chen [v1] Tue, 31 Mar 2026 08:18:26 UTC (41 KB)