
Finite-time analysis of Multi-timescale Stochastic Optimization Algorithms

arXiv cs.LG · Kaustubh Kartikey, Shalabh Bhatnagar · April 1, 2026

arXiv:2603.29380v1 Announce Type: new

Abstract: We present a finite-time analysis of two smoothed functional stochastic approximation algorithms for simulation-based optimization. The first is a two time-scale gradient-based method, while the second is a three time-scale Newton-based algorithm that estimates both the gradient and the Hessian of the objective function $J$. Both algorithms involve zeroth order estimates for the gradient/Hessian. Although the asymptotic convergence of these algorithms has been established in prior work, finite-time guarantees of two-timescale stochastic optimization algorithms in zeroth order settings have not been provided previously. For our Newton algorithm, we derive mean-squared error bounds for the Hessian estimator and establish a finite-time bound on $\min\limits_{0 \le m \le T} \mathbb{E}\|\nabla J(\theta(m))\|^2$, showing convergence to first-order stationary points. The analysis explicitly characterizes the interaction between multiple time-scales and the propagation of estimation errors. We further identify step-size choices that balance dominant error terms and achieve near-optimal convergence rates. We also provide corresponding finite-time guarantees for the gradient algorithm under the same framework. The theoretical results are further validated through experiments on the Continuous Mountain Car environment.
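The two time-scale idea in the abstract — a faster iterate that tracks a zeroth-order (smoothed functional) gradient estimate, and a slower iterate that updates the parameter — can be illustrated with a minimal sketch. The quadratic objective `J`, the one-sided Gaussian smoothing estimator, the smoothing parameter `beta`, and the step-size exponents below are illustrative assumptions, not the paper's actual algorithms or rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def J(theta):
    # Hypothetical noisy simulation objective: a quadratic plus observation noise.
    return float(np.sum(theta**2) + 0.01 * rng.standard_normal())

def two_timescale_sf(theta0, T=5000, beta=0.1):
    """Sketch of a two time-scale smoothed-functional gradient scheme.

    Faster timescale (step b): a running average g of one-sided
    zeroth-order gradient estimates, so g tracks the smoothed gradient.
    Slower timescale (step a): a gradient step on theta using g.
    Choosing a(m) = o(b(m)) separates the two timescales.
    """
    theta = np.asarray(theta0, dtype=float)
    g = np.zeros_like(theta)
    for m in range(1, T + 1):
        a = 0.1 / m          # slower step size (parameter update)
        b = 0.5 / m**0.6     # faster step size (gradient tracking)
        eta = rng.standard_normal(theta.shape)  # Gaussian perturbation
        # One-sided smoothed-functional gradient estimate from two
        # zeroth-order evaluations of J.
        ghat = (eta / beta) * (J(theta + beta * eta) - J(theta))
        g += b * (ghat - g)  # fast averaging iterate
        theta -= a * g       # slow parameter update
    return theta

theta_final = two_timescale_sf(np.array([2.0, -1.5]))
```

The three time-scale Newton variant in the paper would add a third, even faster iterate averaging a zeroth-order Hessian estimate; this sketch shows only the gradient case.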

Subjects:

Machine Learning (cs.LG)

Cite as: arXiv:2603.29380 [cs.LG]

(or arXiv:2603.29380v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2603.29380

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Kaustubh Kartikey [v1] Tue, 31 Mar 2026 07:50:11 UTC (67 KB)
