
Automated Algorithm Design for Auto-Tuning Optimizers

arXiv cs.NE · by Floris-Jan Willemsen, Niki van Stein, Ben van Werkhoven · April 1, 2026


Abstract: Automatic performance tuning (auto-tuning) is essential for optimizing high-performance applications, where vast and irregular search spaces make manual exploration infeasible. While auto-tuners traditionally rely on classical approaches such as evolutionary, annealing, or surrogate-based optimizers, designing algorithms that efficiently find near-optimal configurations robustly across diverse tasks is challenging. We propose a new paradigm: using large language models (LLMs) to automatically generate optimization algorithms tailored to auto-tuning problems. We introduce a framework that prompts LLMs with problem descriptions and search space characteristics to synthesize, test, and iteratively refine specialized optimizers. These generated algorithms are evaluated on four real-world auto-tuning applications across six hardware platforms and compared against the state-of-the-art in two contemporary auto-tuning frameworks. The evaluation demonstrates that providing additional application- and search space-specific information in the generation stage results in an average performance improvement of 30.7% and 14.6%, respectively. In addition, our results show that LLM-generated optimizers can rival, and in various cases outperform, existing human-designed algorithms, with our best-performing generated optimization algorithms achieving an average 72.4% improvement over state-of-the-art optimizers for auto-tuning.
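The abstract describes a synthesize-test-refine loop in which candidate optimizers are scored on a tuning problem and the best survive. The paper's actual framework is not reproduced here; as a rough, hypothetical illustration (the toy cost model, the two candidate optimizers, and all names below are assumptions, not from the paper), the evaluate-and-select stage over a pool of candidate optimizers might look like:

```python
import random

# Toy auto-tuning search space: (block_size, unroll_factor) pairs.
SEARCH_SPACE = [(b, u) for b in (16, 32, 64, 128, 256) for u in (1, 2, 4, 8)]

def cost(cfg):
    """Stand-in for a real kernel benchmark; optimum at (64, 4), lower is better."""
    b, u = cfg
    return abs(b - 64) / 64 + abs(u - 4) / 4

def random_search(space, budget, rng):
    """Candidate optimizer 1: sample configurations uniformly, keep the best."""
    return min(rng.sample(space, min(budget, len(space))), key=cost)

def greedy_hill_climb(space, budget, rng):
    """Candidate optimizer 2: accept any sampled configuration that improves."""
    current = rng.choice(space)
    for _ in range(budget):
        candidate = rng.choice(space)
        if cost(candidate) < cost(current):
            current = candidate
    return current

def select_best_optimizer(candidates, rounds=3, budget=10, seed=0):
    """Evaluate each candidate optimizer over several rounds; keep the best.
    In the paper's framework, an LLM would also *regenerate* candidates
    between rounds; here the pool is fixed for simplicity."""
    rng = random.Random(seed)
    best_opt, best_score = None, float("inf")
    for _ in range(rounds):
        for opt in candidates:
            score = cost(opt(SEARCH_SPACE, budget, rng))
            if score < best_score:
                best_opt, best_score = opt, score
    return best_opt, best_score
```

The missing piece relative to the paper is the generation step: between rounds, an LLM would be prompted with the problem description, the search space characteristics, and the scores so far, and asked to emit new or revised optimizer code to add to the candidate pool.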

Subjects:

Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)

Cite as: arXiv:2510.17899 [cs.LG]

(or arXiv:2510.17899v2 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2510.17899

arXiv-issued DOI via DataCite

Submission history

From: Floris-Jan Willemsen
[v1] Sun, 19 Oct 2025 09:38:15 UTC (2,694 KB)
[v2] Tue, 31 Mar 2026 10:10:10 UTC (2,670 KB)
