Foundations of Polar Linear Algebra
Hey there, little explorer! Imagine you have a special magic crayon that draws in circles, not just straight lines. 🖍️✨
Scientists found a new way to teach computers to learn, like teaching a robot to recognize numbers. Instead of just looking at things in a normal way, they taught the computer to look at things in circles, like a spinning pinwheel! 🌀
This makes the computer super smart and fast, like giving it a shortcut to solve puzzles. It can even learn better with fewer mistakes! So, it's like a new, fun way for computers to think and learn, using circles and spins! Isn't that cool?
Abstract: This work revisits operator learning from a spectral perspective by introducing Polar Linear Algebra, a structured framework based on polar geometry that combines a linear radial component with a periodic angular component. Starting from this formulation, we define the associated operators and analyze their spectral properties. As a proof of feasibility, the framework is evaluated on a canonical benchmark (MNIST). Despite the simplicity of the task, the results demonstrate that polar and fully spectral operators can be trained reliably, and that imposing self-adjoint-inspired spectral constraints improves stability and convergence. Beyond accuracy, the proposed formulation leads to a reduction in parameter count and computational complexity, while providing a more interpretable representation in terms of decoupled spectral modes. By moving from a spatial to a spectral domain, the problem decomposes into orthogonal eigenmodes that can be treated as independent computational pipelines. This structure naturally exposes an additional dimension of model parallelization, complementing existing parallel strategies without relying on ad-hoc partitioning. Overall, the work offers a different conceptual lens for operator learning, particularly suited to problems where spectral structure and parallel execution are central.
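The paper's concrete operator definitions are not reproduced in this abstract, so the sketch below is only an illustrative reading of the general idea, not the authors' method: resample a Cartesian image onto an (r, theta) grid, take a Fourier transform along the periodic angular axis, and act on each angular mode independently with a real (self-adjoint-inspired) diagonal. The function `to_polar`, the grid sizes, and the per-mode scales are hypothetical choices of this note.

```python
import numpy as np

def to_polar(img, n_r=14, n_theta=32):
    """Resample a square image onto a polar (r, theta) grid via
    nearest-neighbor lookup (a deliberately simple choice)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    # Broadcast to an (n_r, n_theta) grid of Cartesian sample points.
    ys = cy + r[:, None] * np.sin(theta)[None, :]
    xs = cx + r[:, None] * np.cos(theta)[None, :]
    return img[np.round(ys).astype(int), np.round(xs).astype(int)]

# Toy 28x28 "image": a radial gradient (rotationally symmetric).
yy, xx = np.mgrid[0:28, 0:28]
img = np.hypot(yy - 13.5, xx - 13.5)

polar = to_polar(img)               # shape (14, 32): radial x angular
modes = np.fft.rfft(polar, axis=1)  # angular Fourier modes, shape (14, 17)

# A rotationally symmetric image concentrates its energy in mode 0.
energy = np.abs(modes) ** 2

# Self-adjoint-inspired per-mode operator: a real diagonal acting on each
# angular mode independently -- each mode is its own "pipeline".
scales = np.linspace(1.0, 0.1, modes.shape[1])
filtered = np.fft.irfft(modes * scales, n=polar.shape[1], axis=1)
```

Because the angular axis is periodic, the Fourier modes are exactly the orthogonal directions mentioned in the abstract: a per-mode diagonal couples no modes to each other, which is what makes mode-level parallel execution possible.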
Comments: 59 pages, 4 figures, including appendices
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA)
Cite as: arXiv:2603.28939 [cs.LG]
(or arXiv:2603.28939v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2603.28939
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Giovanni Guasti [v1] Mon, 30 Mar 2026 19:17:40 UTC (3,422 KB)