A Strong Linear Programming Relaxation for Weighted Tree Augmentation
arXiv:2603.29582v1 Announce Type: new
Abstract: The Weighted Tree Augmentation Problem (WTAP) is a fundamental network design problem in which the goal is to find a minimum-cost set of additional edges (links) whose addition makes an input tree 2-edge-connected. While a 2-approximation is standard and the integrality gap of the classic Cut LP relaxation is known to be at least 1.5, achieving approximation factors significantly below 2 has proven challenging. Recent advances by Traub and Zenklusen using local search culminated in a ratio of $1.5+\epsilon$, establishing the state of the art. In this work, we present a randomized approximation algorithm for WTAP with an approximation ratio below 1.49. Our approach is based on designing and rounding a strong linear programming relaxation for WTAP that incorporates variables representing subsets of tree edges and the links used to cover them, inspired by lift-and-project methods such as Sherali-Adams.
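For context, the classic Cut LP relaxation mentioned in the abstract can be sketched as follows. This is the standard formulation from the tree augmentation literature, not quoted from the paper itself: here $T=(V,E)$ is the input tree, $L$ the set of candidate links with costs $c_\ell$, and $P_\ell \subseteq E$ the tree edges on the unique tree path between the endpoints of link $\ell$ (the edges that $\ell$ covers).

```latex
\begin{aligned}
\min \quad & \sum_{\ell \in L} c_\ell \, x_\ell \\
\text{s.t.} \quad & \sum_{\ell \in L \,:\, e \in P_\ell} x_\ell \;\ge\; 1
  && \forall\, e \in E, \\
& x_\ell \;\ge\; 0 && \forall\, \ell \in L.
\end{aligned}
```

Each constraint says that every tree edge must be covered by at least one chosen link; integral solutions are exactly the feasible augmentations. As the abstract notes, this relaxation has integrality gap at least 1.5, which is what motivates the stronger lifted relaxation the paper designs.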
Comments: Full version of a paper accepted to STOC 2026
Subjects: Data Structures and Algorithms (cs.DS)
Cite as: arXiv:2603.29582 [cs.DS]
(or arXiv:2603.29582v1 [cs.DS] for this version)
https://doi.org/10.48550/arXiv.2603.29582
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Marina Drygala [view email] [v1] Tue, 31 Mar 2026 11:03:58 UTC (96 KB)