High-probability Convergence Guarantees of Decentralized SGD
arXiv:2510.06141v4 Announce Type: replace-cross
Abstract: Convergence in high-probability (HP) has attracted increasing interest, as it implies exponentially decaying tail bounds and strong guarantees for individual runs of an algorithm. While many works study HP guarantees in centralized settings, much less is understood in the decentralized setup, where existing works require strong assumptions, such as uniformly bounded gradients or asymptotically vanishing noise. This creates a significant gap between the assumptions used to establish convergence in the HP and the mean-squared error (MSE) sense, and contrasts with centralized settings, where it is known that $\mathtt{SGD}$ converges in HP under the same conditions on the cost function as needed for MSE convergence. Motivated by these observations, we study the HP convergence of Decentralized $\mathtt{SGD}$ ($\mathtt{DSGD}$) in the presence of light-tailed noise, providing several strong results. First, we show that $\mathtt{DSGD}$ converges in HP under the same conditions on the cost as in the MSE sense, removing the restrictive assumptions used in prior works. Second, our sharp analysis yields order-optimal rates for both non-convex and strongly convex costs. Third, we establish a linear speed-up in the number of users, leading to matching or strictly better transient times than those obtained from MSE results, further underlining the tightness of our analysis. To the best of our knowledge, this is the first work that shows $\mathtt{DSGD}$ achieves a linear speed-up in the HP sense. Our relaxed assumptions and sharp rates stem from several technical results of independent interest, including a result on the variance-reduction effect of decentralized methods in the HP sense, as well as a novel bound on the MGF of strongly convex costs, which is of interest even in centralized settings. Finally, we provide experiments that validate our theory.
Comments: 49 pages, 2 figures
Subjects:
Machine Learning (cs.LG); Multiagent Systems (cs.MA); Optimization and Control (math.OC)
Cite as: arXiv:2510.06141 [cs.LG]
(or arXiv:2510.06141v4 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2510.06141
Submission history
From: Aleksandar Armacki
[v1] Tue, 7 Oct 2025 17:15:08 UTC (56 KB)
[v2] Wed, 17 Dec 2025 19:25:12 UTC (243 KB)
[v3] Thu, 5 Feb 2026 13:26:07 UTC (243 KB)
[v4] Wed, 1 Apr 2026 00:14:11 UTC (246 KB)
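As context for the algorithm the abstract analyzes, the following is a minimal toy sketch of the DSGD update: each agent mixes its iterate with its neighbors' via a doubly stochastic matrix, then takes a local stochastic gradient step. The ring topology, quadratic local costs, noise level, and step-size schedule below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Toy strongly convex problem: agent i holds f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer of the average cost is mean(b).
rng = np.random.default_rng(0)
n = 8                                 # number of agents
b = rng.normal(size=n)                # local targets (assumed data)

# Doubly stochastic mixing matrix for a ring graph (lazy uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 1.0 / 3.0
    W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

x = np.zeros(n)                       # one scalar iterate per agent
for t in range(2000):
    noise = 0.1 * rng.normal(size=n)  # light-tailed (Gaussian) gradient noise
    grad = (x - b) + noise            # stochastic local gradients
    step = 1.0 / (t + 10)             # decaying step size
    x = W @ x - step * grad           # consensus step + local SGD step

# After mixing and decaying steps, all agents cluster near mean(b).
print(np.max(np.abs(x - b.mean())))
```

Because W is doubly stochastic, the network average of the iterates evolves exactly like centralized SGD on the average cost, while the consensus error contracts at a rate governed by W's spectral gap; this averaging of the n independent noise terms is the mechanism behind the linear speed-up the abstract refers to.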