Classifying Identities: Subcubic Distributivity Checking and Hardness from Arithmetic Progression Detection
Hey there, little explorer! 🚀
Imagine you have building blocks and two rules for combining them: one for stacking (call it ⊕) and one for mixing (call it ⊙).
Scientists (grown-ups who love puzzles!) want the fastest way to check whether a rule holds for ALL the blocks at once. One famous rule is "distributivity": mixing a block with a stack should give the same result as mixing it with each piece first and then stacking, like a ⊙ (b ⊕ c) = (a ⊙ b) ⊕ (a ⊙ c).
This paper is exciting because it finds a NEW, much faster shortcut for checking distributivity than anyone knew before, and it also shows that doing much better than that is probably impossible! 🏎️💨
So they made computers smarter and faster at checking rules! Yay! 🎉
Abstract: We revisit the complexity of verifying basic identities, such as associativity and distributivity, on a given finite algebraic structure. In particular, while Rajagopalan and Schulman (FOCS'96, SICOMP'00) gave a surprising randomized algorithm to verify associativity of an operation $\odot: S\times S\to S$ in optimal time $O(|S|^2)$, they left the open problem of finding any subcubic algorithm for verifying distributivity of given operations $\odot,\oplus: S\times S\to S$. Our results are as follows:
- We resolve the open problem of Rajagopalan and Schulman by devising an algorithm that verifies distributivity in strongly subcubic time $O(|S|^\omega)$, together with a matching conditional lower bound based on the Triangle Detection Hypothesis.
- We propose arithmetic progression detection in small universes as a consequential algorithmic challenge: We show that unless we can detect $4$-term arithmetic progressions in a set $X\subseteq\{1,\dots, N\}$ in time $O(N^{2-\epsilon})$, (a) the 3-uniform 4-hyperclique hypothesis is true, and (b) verifying certain identities requires running time $|S|^{3-o(1)}$.
- A careful combination of our algorithmic and hardness ideas allows us to \emph{fully classify} a natural subclass of identities: Specifically, any 3-variable identity over binary operations in which no side is a subexpression of the other is either: (1) verifiable in randomized time $O(|S|^2)$, (2) verifiable in randomized time $O(|S|^\omega)$ with a matching lower bound from triangle detection, or (3) trivially verifiable in time $O(|S|^3)$ with a matching lower bound from hardness of 4-term arithmetic progression detection.
- We obtain near-optimal algorithms for verifying whether a given algebraic structure forms a field or ring, and show that \emph{counting} the number of distributive triples is conditionally harder than verifying distributivity.
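For concreteness, the trivial $O(|S|^3)$ baseline that the paper's $O(|S|^\omega)$ algorithm improves on enumerates every triple and tests both sides of the identity. A minimal Python sketch (function and variable names are my own; operation tables are given as dicts, which is one way to represent finite operations $\odot,\oplus: S\times S\to S$):

```python
from itertools import product

def is_distributive(S, mul, add):
    """Brute-force O(|S|^3) check that `mul` distributes over `add`.

    S is the carrier set; `mul` and `add` are full operation tables,
    given as dicts mapping a pair (x, y) to an element of S.
    Verifies both sides of distributivity for every triple (a, b, c):
        a * (b + c) == (a * b) + (a * c)   (left)
        (b + c) * a == (b * a) + (c * a)   (right)
    """
    for a, b, c in product(S, repeat=3):
        if mul[a, add[b, c]] != add[mul[a, b], mul[a, c]]:
            return False  # left distributivity fails at (a, b, c)
        if mul[add[b, c], a] != add[mul[b, a], mul[c, a]]:
            return False  # right distributivity fails at (a, b, c)
    return True

# Example: the ring Z/4Z, where multiplication distributes over addition.
Z4 = list(range(4))
times = {(x, y): (x * y) % 4 for x in Z4 for y in Z4}
plus = {(x, y): (x + y) % 4 for x in Z4 for y in Z4}
print(is_distributive(Z4, times, plus))  # → True
```

The paper's contribution is precisely to beat this cubic loop: its $O(|S|^\omega)$ algorithm reduces the check to fast matrix multiplication, and the conditional lower bound shows the exponent $\omega$ is likely the right answer.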
Comments: To appear at STOC 2026
Subjects: Data Structures and Algorithms (cs.DS)
Cite as: arXiv:2603.28843 [cs.DS]
(or arXiv:2603.28843v1 [cs.DS] for this version)
https://doi.org/10.48550/arXiv.2603.28843
Submission history
From: Bartlomiej Dudek [v1] Mon, 30 Mar 2026 17:42:29 UTC (84 KB)