
Sample Complexity Analysis of Multi-Target Detection via Markovian and Hard-Core Multi-Reference Alignment

arXiv eess.SP · by Kweku Abraham, Amnon Balanov, Tamir Bendory, Carlos Esteve-Yagüe · March 31, 2026

Abstract: Motivated by single-particle cryo-electron microscopy, we study the sample complexity of the multi-target detection (MTD) problem, in which an unknown signal appears multiple times at unknown locations within a long, noisy observation. We propose a patching scheme that reduces MTD to a non-i.i.d. multi-reference alignment (MRA) model. In the one-dimensional setting, the latent group elements form a Markov chain, and we show that the convergence rate of any estimator matches that of the corresponding i.i.d. MRA model, up to a logarithmic factor in the number of patches. Moreover, for estimators based on empirical averaging, such as the method of moments, the convergence rates are identical in both settings. We further establish an analogous result in two dimensions, where the latent structure arises from an exponentially mixing random field generated by a hard-core placement model. As a consequence, if the signal in the corresponding i.i.d. MRA model is determined by moments up to order $n_{\min}$, then in the low-SNR regime the number of patches required to estimate the signal in the MTD model scales as $\sigma^{2n_{\min}}$, where $\sigma^2$ denotes the noise variance.
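The patching-plus-empirical-averaging pipeline described in the abstract can be sketched numerically. The toy below (not the paper's code; signal length, copy count, separation rule, and noise level are all illustrative assumptions) plants shifted copies of a 1-D signal in a long noisy measurement with a crude hard-core separation, cuts the measurement into signal-length patches as in the proposed patching scheme, and then forms shift-invariant moment estimates (mean and power spectrum) by averaging over patches, i.e. a method-of-moments estimator of the kind the sample-complexity result covers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper):
# signal length L, number of planted copies, noise level sigma.
L, n_copies, sigma = 8, 200, 0.5
signal = rng.standard_normal(L)

# Build a long 1-D observation: each copy is cyclically shifted and
# followed by a signal-length gap, a crude stand-in for the hard-core
# placement that keeps occurrences well separated.
segments = []
for _ in range(n_copies):
    shift = rng.integers(L)
    segments.append(np.roll(signal, shift))
    segments.append(np.zeros(L))  # enforced separation
y = np.concatenate(segments)
y = y + sigma * rng.standard_normal(y.shape)

# Patching scheme (schematic): cut the observation into length-L
# patches; each patch plays the role of one non-i.i.d. MRA sample.
patches = y[: (len(y) // L) * L].reshape(-1, L)

# Method of moments via empirical averaging: the mean and the power
# spectrum are invariant to cyclic shifts, so averaging them over
# patches estimates the corresponding moments of the signal (here
# diluted by the gap patches and biased by the noise floor, which a
# full estimator would correct for).
mean_est = patches.mean()
power_est = (np.abs(np.fft.fft(patches, axis=1)) ** 2).mean(axis=0)
```

The point of the sketch is only the structure of the reduction: once the long observation is re-expressed as patches, any averaging-based MRA estimator applies unchanged, which is why the paper can compare its convergence rate directly to the i.i.d. MRA rate.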

Subjects: Signal Processing (eess.SP); Information Theory (cs.IT)

Cite as: arXiv:2510.17775 [eess.SP]

(or arXiv:2510.17775v3 [eess.SP] for this version)

https://doi.org/10.48550/arXiv.2510.17775

arXiv-issued DOI via DataCite

Submission history

From: Carlos Esteve-Yagüe
[v1] Mon, 20 Oct 2025 17:35:19 UTC (2,063 KB)
[v2] Mon, 15 Dec 2025 14:20:36 UTC (557 KB)
[v3] Mon, 30 Mar 2026 09:12:14 UTC (1,039 KB)
