Non-Signaling Locality Lower Bounds for Dominating Set
arXiv:2604.02582v1 Announce Type: new
Abstract: Minimum dominating set is a basic local covering problem and a core task in distributed computing. Despite extensive study, in the classic LOCAL model significant gaps remain between known algorithms and lower bounds. Chang and Li proved an $\Omega(\log n)$-locality lower bound for constant-factor approximation, while Kuhn--Moscibroda--Wattenhofer gave an algorithm beating this bound beyond $\log \Delta$-approximation, along with a weaker lower bound for this degree-dependent setting scaling roughly with $\min\{\log \Delta/\log\log \Delta,\sqrt{\log n/\log\log n}\}$. Unfortunately, this latter bound is weak for small $\Delta$, and never recovers the Chang--Li bound, leaving central questions: does $O(\log \Delta)$-approximation require $\Omega(\log n)$ locality, and do such bounds extend beyond LOCAL? In this work, we take a major step toward answering these questions in the non-signaling model, which strictly subsumes the LOCAL, quantum-LOCAL, and bounded-dependence settings. We prove every $O(\log\Delta)$-approximate non-signaling distribution for dominating set requires locality $\Omega(\log n/(\log\Delta \cdot \mathrm{poly}\log\log\Delta))$. Further, we show for some $\beta \in (0,1)$, every $O(\log^\beta \Delta)$-approximate non-signaling distribution requires locality $\Omega(\log n/\log\Delta)$, which combined with the KMW bound yields a degree-independent $\Omega(\sqrt{\log n/\log\log n})$ quantum-LOCAL lower bound for $O(\log^\beta\Delta)$-approximation algorithms. The proof is based on two new low-soundness sensitivity lower bounds for label cover, one via Impagliazzo--Kabanets--Wigderson-style parallel repetition with degree reduction and one from a sensitivity-preserving reworking of the Dinur--Harsha framework, together with the reductions from label cover to set cover to dominating set and the sensitivity-to-locality transfer theorem of Fleming and Yoshida.
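For context on the approximation regime the abstract discusses: the classic (centralized) greedy algorithm achieves an $O(\log \Delta)$-approximation for minimum dominating set, which is exactly the ratio whose distributed locality the paper's lower bounds constrain. A minimal sketch of that greedy baseline, on a hypothetical adjacency-set representation (not code from the paper):

```python
def greedy_dominating_set(adj):
    """Centralized greedy for minimum dominating set.

    adj maps each vertex to its set of neighbors. Repeatedly pick the
    vertex whose closed neighborhood covers the most still-undominated
    vertices; the standard set-cover analysis gives an O(log Delta)
    approximation ratio, where Delta is the maximum degree.
    """
    undominated = set(adj)
    dom_set = []
    while undominated:
        # Candidate gain = size of closed neighborhood intersected
        # with the set of not-yet-dominated vertices.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom_set.append(best)
        undominated -= {best} | adj[best]
    return dom_set

# Example: star graph K_{1,4}; the center alone dominates every vertex.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(greedy_dominating_set(star))  # → [0]
```

The paper's results show that no non-signaling distribution (hence no LOCAL or quantum-LOCAL algorithm) can match this ratio with locality below roughly $\log n/(\log\Delta \cdot \mathrm{poly}\log\log\Delta)$.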
Subjects:
Data Structures and Algorithms (cs.DS)
Cite as: arXiv:2604.02582 [cs.DS]
(or arXiv:2604.02582v1 [cs.DS] for this version)
https://doi.org/10.48550/arXiv.2604.02582
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Yuichi Yoshida [v1] Thu, 2 Apr 2026 23:35:00 UTC (116 KB)