
WAter: A Workload-Adaptive Knob Tuning System based on Workload Compression

arXiv cs.DB · by Yibo Wang, Jiale Lao, Chen Zhang, Cehua Yang, Jianguo Wang, Mingjie Tang · April 1, 2026 · 2 min read

Abstract: Selecting appropriate values for the configurable parameters of Database Management Systems (DBMS) to improve performance is a significant challenge. Recent machine learning (ML)-based tuning systems have shown strong potential, but their practical adoption is often limited by the high tuning cost. This cost arises from two main factors: (1) the system needs to evaluate a large number of configurations to identify a satisfactory one, and (2) for each configuration, the system must execute the entire target workload on the DBMS, which is time-consuming. Existing studies have primarily addressed the first factor by improving sample efficiency, that is, by reducing the number of configurations evaluated. However, the second factor, improving runtime efficiency by reducing the time required for each evaluation, has received limited attention and remains an underexplored direction. We develop WAter, a runtime-efficient and workload-adaptive tuning system that finds near-optimal configurations at a fraction of the tuning cost compared with state-of-the-art methods. We divide the tuning process into multiple time slices and evaluate only a small subset of queries from the workload in each slice. Different subsets are evaluated across slices, and a runtime profile is used to dynamically identify more representative subsets for evaluation in subsequent slices. At the end of each time slice, the most promising configurations are evaluated on the original workload to measure their actual performance. Evaluations demonstrate that WAter identifies the best-performing configurations with up to 73.5% less tuning time and achieves up to 16.2% higher performance than the best-performing alternative.
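The abstract's core loop — score candidate configurations on small query subsets per time slice, maintain a runtime profile to bias later subsets toward more informative queries, and validate only the most promising configurations on the full workload — can be sketched as follows. This is an illustrative sketch only, not the paper's actual algorithm: the function names (`tune_with_slices`, `evaluate_query`), the weighting heuristic, and all parameters are assumptions made for the example.

```python
import random

def tune_with_slices(workload, candidate_configs, evaluate_query,
                     subset_size=3, n_slices=4, top_k=1, seed=0):
    """Illustrative sketch of time-sliced subset evaluation.

    workload          : list of query identifiers
    candidate_configs : list of DBMS configurations to compare
    evaluate_query    : callable (config, query) -> runtime in seconds
    """
    rng = random.Random(seed)
    # Runtime profile: running-mean runtime observed per query so far.
    profile = {q: 0.0 for q in workload}
    counts = {q: 0 for q in workload}
    scores = {c: 0.0 for c in candidate_configs}

    for _ in range(n_slices):
        # Pick this slice's subset; as the profile fills in, weight the
        # draw toward longer-running (more cost-dominant) queries.
        weights = [1.0 + profile[q] for q in workload]
        subset = rng.choices(workload, weights=weights, k=subset_size)
        for config in candidate_configs:
            for q in subset:
                t = evaluate_query(config, q)
                counts[q] += 1
                profile[q] += (t - profile[q]) / counts[q]  # update running mean
                scores[config] += t  # lower accumulated runtime = more promising

    # Validation step: only the top-k cheapest configs pay the cost of a
    # full-workload run, which measures their actual performance.
    promising = sorted(candidate_configs, key=lambda c: scores[c])[:top_k]
    return min(promising,
               key=lambda c: sum(evaluate_query(c, q) for q in workload))
```

The runtime saving in this sketch comes from the same source the abstract identifies: each slice executes `subset_size` queries per configuration instead of the entire workload, and the expensive full-workload run is reserved for the final shortlist.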

Subjects:

Databases (cs.DB); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Cite as: arXiv:2603.28809 [cs.DB]

(or arXiv:2603.28809v1 [cs.DB] for this version)

https://doi.org/10.48550/arXiv.2603.28809

arXiv-issued DOI via DataCite

Submission history

From: Yibo Wang [v1] Sat, 28 Mar 2026 06:00:53 UTC (634 KB)
