
Sublinear-query relative-error testing of halfspaces

arXiv cs.DS · Submitted on 2 Apr 2026


Abstract: The relative-error property testing model was introduced in [CDHLNSY24] to facilitate the study of property testing for "sparse" Boolean-valued functions, i.e. ones for which only a small fraction of all input assignments satisfy the function. In this framework, the distance from the unknown target function $f$ that is being tested to a function $g$ is defined as $\mathrm{Vol}(f \mathop{\triangle} g)/\mathrm{Vol}(f)$, where the numerator is the fraction of inputs on which $f$ and $g$ disagree and the denominator is the fraction of inputs that satisfy $f$. Recent work [CDHNSY26] has shown that over the Boolean domain $\{0,1\}^n$, any relative-error testing algorithm for the fundamental class of halfspaces (i.e. linear threshold functions) must make $\Omega(\log n)$ oracle calls. In this paper we complement the [CDHNSY26] lower bound by showing that halfspaces can be relative-error tested over $\mathbb{R}^n$ under the standard $N(0,I_n)$ Gaussian distribution using a sublinear number of oracle calls -- in particular, substantially fewer than would be required for learning. Our results use a wide range of tools including Hermite analysis, Gaussian isoperimetric inequalities, and geometric results on noise sensitivity and surface area.
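To make the distance measure concrete: the quantity $\mathrm{Vol}(f \mathop{\triangle} g)/\mathrm{Vol}(f)$ can be estimated by brute-force Monte Carlo sampling under the $N(0,I_n)$ Gaussian. The sketch below does exactly that for two halfspaces; it is *not* the paper's sublinear-query tester (which uses far fewer oracle calls), just an illustration of the relative-error metric being tested. The helper names `halfspace` and `relative_error_distance` are our own, not from the paper.

```python
import random

def halfspace(w, theta):
    """Indicator of the halfspace {x : w.x >= theta} (a hypothetical helper)."""
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) >= theta

def relative_error_distance(f, g, n, samples=200_000, seed=0):
    """Naive Monte Carlo estimate of Vol(f triangle g) / Vol(f)
    under the standard N(0, I_n) Gaussian distribution."""
    rng = random.Random(seed)
    disagree = 0   # samples where f and g differ (the symmetric difference)
    satisfy_f = 0  # samples satisfying f (the normalizer Vol(f))
    for _ in range(samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        fx, gx = f(x), g(x)
        if fx:
            satisfy_f += 1
        if fx != gx:
            disagree += 1
    return disagree / satisfy_f

# Two origin-centered halfspaces in R^3 whose normals differ by a small angle;
# the disagreement region is a thin wedge, so the relative error is small.
f = halfspace([1.0, 0.0, 0.0], 0.0)
g = halfspace([1.0, 0.1, 0.0], 0.0)
d = relative_error_distance(f, g, n=3)
```

For origin-centered halfspaces the disagreement probability under the rotation-invariant Gaussian is $\arctan(0.1)/\pi \approx 0.032$, and $\mathrm{Vol}(f) = 1/2$, so the estimate should land near $0.064$. This brute-force approach needs a number of samples scaling with $1/\mathrm{Vol}(f)$, which is exactly the regime where the relative-error model and the paper's sublinear testers become interesting.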

Subjects:

Data Structures and Algorithms (cs.DS); Computational Complexity (cs.CC)

Cite as: arXiv:2604.01557 [cs.DS]

(or arXiv:2604.01557v1 [cs.DS] for this version)

https://doi.org/10.48550/arXiv.2604.01557

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Yizhi Huang [v1] Thu, 2 Apr 2026 03:01:25 UTC (197 KB)
