Total Variation Guarantees for Sampling with Stochastic Localization
arXiv:2603.29555v1 Announce Type: new
Abstract: Motivated by the success of score-based generative models, a number of diffusion-based algorithms have recently been proposed for the problem of sampling from a probability measure whose unnormalized density can be accessed. Among them, Grenioux et al. introduced SLIPS, a sampling algorithm based on Stochastic Localization. While SLIPS exhibits strong empirical performance, no rigorous convergence analysis has previously been provided. In this work, we close this gap by establishing the first guarantee for SLIPS in total variation distance. Under minimal assumptions on the target, our bound implies that the number of steps required to achieve an $\varepsilon$-guarantee scales linearly with the dimension, up to logarithmic factors. The analysis leverages techniques from the theory of score-based generative models and further provides theoretical insights into the empirically observed optimal choice of discretization points.
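To make the setting concrete, below is a minimal Python sketch of sampling via stochastic localization: simulate the observation process $Y_t = tX + B_t$, whose law solves $dY_t = m_t(Y_t)\,dt + dB_t$ with denoiser $m_t(y) = \mathbb{E}[X \mid Y_t = y]$, estimating $m_t$ with a few inner Langevin steps on the tilted density $\propto \exp(-U(x) + y \cdot x - t\|x\|^2/2)$. This is an illustrative toy, not the SLIPS scheme analyzed in the paper; the target potential, the functions `grad_U`, `posterior_mean`, and `sl_sampler`, and all step sizes and iteration counts (`n_inner`, `h`, `T`) are assumptions made for the sketch.

```python
import numpy as np

# Toy stochastic-localization sampler for pi(x) ∝ exp(-U(x)).
# Observation process: Y_t = t*X + B_t, so dY_t = m_t(Y_t) dt + dB_t with
# m_t(y) = E[X | Y_t = y], and Y_T / T concentrates near a sample of pi.


def grad_U(x):
    # Example double-well potential U(x) = (|x|^2 - 1)^2 / 4 (an assumption).
    return x * (np.dot(x, x) - 1.0)


def posterior_mean(y, t, x0, n_inner=50, h=1e-2, rng=None):
    """Crude estimate of m_t(y) via unadjusted Langevin on the tilted
    density p_t(x | y) ∝ exp(-U(x) + y·x - t*|x|^2/2); ad hoc tuning."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    running_sum = np.zeros_like(x)
    for _ in range(n_inner):
        # grad log p_t(x | y) = -grad U(x) + y - t*x
        drift = -grad_U(x) + y - t * x
        x = x + h * drift + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        running_sum += x
    return running_sum / n_inner, x


def sl_sampler(d=2, T=20.0, n_steps=400, rng=None):
    """Euler discretization of the outer SDE dY = m_t(Y) dt + dB."""
    rng = np.random.default_rng() if rng is None else rng
    ts = np.linspace(1e-3, T, n_steps + 1)
    y = np.zeros(d)
    x_warm = rng.standard_normal(d)  # warm start for the inner chain
    for k in range(n_steps):
        t, dt = ts[k], ts[k + 1] - ts[k]
        m, x_warm = posterior_mean(y, t, x_warm, rng=rng)
        y = y + m * dt + np.sqrt(dt) * rng.standard_normal(d)
    return y / ts[-1]  # Y_T / T ≈ X for large T


if __name__ == "__main__":
    samples = np.array([sl_sampler() for _ in range(200)])
    print("sample mean:", samples.mean(axis=0))  # ≈ 0 by symmetry
    print("mean radius:", np.linalg.norm(samples, axis=1).mean())
```

Broadly, SLIPS couples an outer localization dynamic with inner posterior sampling in this two-level fashion, but the exact scheme, time parametrization, and discretization grid treated in the paper differ from this toy version; the paper's contribution is the total variation bound for that scheme and the resulting guidance on choosing the discretization points.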
Comments: 12 pages main body, 13 pages Appendix
Subjects: Machine Learning (cs.LG); Probability (math.PR)
MSC classes: 60J05
ACM classes: G.3
Cite as: arXiv:2603.29555 [cs.LG]
(or arXiv:2603.29555v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2603.29555
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Jakob Kellermann [v1] Tue, 31 Mar 2026 10:34:42 UTC (50 KB)