
High-probability Convergence Guarantees of Decentralized SGD

arXiv cs.MA — [Submitted on 7 Oct 2025 (v1), last revised 1 Apr 2026 (this version, v4)]

arXiv:2510.06141v4 Announce Type: replace-cross

Abstract: Convergence in high-probability (HP) has attracted increasing interest, due to implying exponentially decaying tail bounds and strong guarantees for individual runs of an algorithm. While many works study HP guarantees in centralized settings, much less is understood in the decentralized setup, where existing works require strong assumptions, like uniformly bounded gradients, or asymptotically vanishing noise. This results in a significant gap between the assumptions used to establish convergence in the HP and the mean-squared error (MSE) sense, and is also contrary to centralized settings, where it is known that $\mathtt{SGD}$ converges in HP under the same conditions on the cost function as needed for MSE convergence. Motivated by these observations, we study the HP convergence of Decentralized $\mathtt{SGD}$ ($\mathtt{DSGD}$) in the presence of light-tailed noise, providing several strong results. First, we show that $\mathtt{DSGD}$ converges in HP under the same conditions on the cost as in the MSE sense, removing the restrictive assumptions used in prior works. Second, our sharp analysis yields order-optimal rates for both non-convex and strongly convex costs. Third, we establish a linear speed-up in the number of users, leading to matching, or strictly better transient times than those obtained from MSE results, further underlining the tightness of our analysis. To the best of our knowledge, this is the first work that shows $\mathtt{DSGD}$ achieves a linear speed-up in the HP sense. Our relaxed assumptions and sharp rates stem from several technical results of independent interest, including a result on the variance-reduction effect of decentralized methods in the HP sense, as well as a novel bound on the MGF of strongly convex costs, which is of interest even in centralized settings. Finally, we provide experiments that validate our theory.
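The $\mathtt{DSGD}$ iteration analyzed in the abstract can be sketched in a few lines: each node mixes its iterate with its neighbors' via a doubly stochastic matrix $W$, then takes a local stochastic gradient step. The quadratic local costs, ring topology, and Gaussian (light-tailed) gradient noise below are illustrative assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, dim, n_iters, step = 5, 3, 2000, 0.05
target = rng.normal(size=dim)          # common minimizer of the local costs

def noisy_grad(x):
    # gradient of the local cost 0.5*||x - target||^2,
    # plus Gaussian (hence light-tailed) noise
    return (x - target) + 0.1 * rng.normal(size=x.shape)

# doubly stochastic mixing matrix for a ring graph:
# weight 1/2 on self, 1/4 on each of the two neighbors
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

x = rng.normal(size=(n_nodes, dim))    # one iterate per node
for _ in range(n_iters):
    mixed = W @ x                      # consensus (mixing) step
    grads = np.stack([noisy_grad(mixed[i]) for i in range(n_nodes)])
    x = mixed - step * grads           # local stochastic gradient step

# the network average drives toward the common minimizer
print(np.linalg.norm(x.mean(axis=0) - target))
```

The HP guarantees in the paper concern exactly this kind of run: they bound the deviation of an individual trajectory, rather than only its mean-squared error, and the linear speed-up refers to how the averaged iterate improves with the number of nodes.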

Comments: 49 pages, 2 figures

Subjects: Machine Learning (cs.LG); Multiagent Systems (cs.MA); Optimization and Control (math.OC)

Cite as: arXiv:2510.06141 [cs.LG]

(or arXiv:2510.06141v4 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2510.06141

arXiv-issued DOI via DataCite

Submission history

From: Aleksandar Armacki
[v1] Tue, 7 Oct 2025 17:15:08 UTC (56 KB)
[v2] Wed, 17 Dec 2025 19:25:12 UTC (243 KB)
[v3] Thu, 5 Feb 2026 13:26:07 UTC (243 KB)
[v4] Wed, 1 Apr 2026 00:14:11 UTC (246 KB)
