Structural Segmentation of the Minimum Set Cover Problem: Exploiting Universe Decomposability for Metaheuristic Optimization
arXiv:2604.03234v1 Announce Type: new
Abstract: The Minimum Set Cover Problem (MSCP) is a classical NP-hard combinatorial optimization problem with numerous applications in science and engineering. Although a wide range of exact, approximate, and metaheuristic approaches have been proposed, most methods implicitly treat MSCP instances as monolithic, overlooking potential intrinsic structural properties of the universe. In this work, we investigate the concept of universe segmentability in the MSCP and analyze how this intrinsic structural decomposition can be exploited to enhance heuristic optimization. We propose an efficient preprocessing strategy based on disjoint-set union (union-find) to detect connected components induced by element co-occurrence within subsets, enabling the decomposition of the original instance into independent subproblems. Each subproblem is solved using the GRASP metaheuristic, and partial solutions are combined without compromising feasibility. Extensive experiments on standard benchmark instances and large-scale synthetic datasets show that exploiting natural universe segmentation consistently improves solution quality and scalability, particularly for large and structurally decomposable instances. These gains are supported by a succinct bit-level set representation that enables efficient set operations, making the proposed approach computationally practical at scale.
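The preprocessing step described in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: elements that co-occur in any subset are unioned, and the resulting connected components partition the universe into independent subproblems. The function and variable names are assumptions for illustration.

```python
def find(parent, x):
    # iterative find with path halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def segment_universe(universe, subsets):
    """Partition a set-cover instance into independent subproblems
    using union-find over element co-occurrence within subsets."""
    # one union-find node per universe element
    parent = {e: e for e in universe}
    for s in subsets:
        it = iter(s)
        first = next(it, None)
        if first is None:
            continue  # empty subset joins nothing
        # all elements of a subset co-occur, so union them pairwise
        for e in it:
            ra, rb = find(parent, first), find(parent, e)
            if ra != rb:
                parent[rb] = ra
    # group universe elements by component root
    components = {}
    for e in universe:
        components.setdefault(find(parent, e), set()).add(e)
    # every subset lies wholly inside one component, so one
    # representative element suffices to assign it
    subproblems = []
    for comp in components.values():
        comp_sets = [s for s in subsets if s and next(iter(s)) in comp]
        subproblems.append((comp, comp_sets))
    return subproblems
```

On the universe {1, ..., 6} with subsets {1,2}, {2,3}, {4,5}, {5,6}, this yields two independent subproblems over {1,2,3} and {4,5,6}; a metaheuristic such as GRASP can then be run on each component and the partial covers concatenated without affecting feasibility, since no subset spans two components.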
Comments: Submitted to journal
Subjects:
Artificial Intelligence (cs.AI); Optimization and Control (math.OC)
Cite as: arXiv:2604.03234 [cs.AI]
(or arXiv:2604.03234v1 [cs.AI] for this version)
https://doi.org/10.48550/arXiv.2604.03234
arXiv-issued DOI via DataCite
Submission history
From: Cristobal A. Navarro [v1] Thu, 29 Jan 2026 14:02:48 UTC (1,858 KB)

CavMerge: Merging K-means Based on Local Log-Concavity
arXiv:2604.04302v1 Announce Type: cross Abstract: K-means clustering, a classic and widely-used clustering technique, is known to exhibit suboptimal performance when applied to non-linearly separable data. Numerous adjustments and modifications have been proposed to address this issue, including methods that merge K-means results from a relatively large K to obtain a final cluster assignment. However, existing methods of this nature often encounter computational inefficiencies and suffer from hyperparameter tuning. Here we present \emph{CavMerge}, a novel K-means merging algorithm that is intu — Zhili Qiao, Wangqian Ju, Peng Liu

High-Stakes Personalization: Rethinking LLM Customization for Individual Investor Decision-Making
arXiv:2604.04300v1 Announce Type: cross Abstract: Personalized LLM systems have advanced rapidly, yet most operate in domains where user preferences are stable and ground truth is either absent or subjective. We argue that individual investor decision-making presents a uniquely challenging domain for LLM personalization - one that exposes fundamental limitations in current customization paradigms. Drawing on our system, built and deployed for AI-augmented portfolio management, we identify four axes along which individual investing exposes fundamental limitations in standard LLM customization: — Yash Ganpat Sawant

A Family of Open Time-Series Foundation Models for the Radio Access Network
arXiv:2604.04271v1 Announce Type: cross Abstract: The Radio Access Network (RAN) is evolving into a programmable and disaggregated infrastructure that increasingly relies on AI-native algorithms for optimization and closed-loop control. However, current RAN intelligence is still largely built from task-specific models tailored to individual functions, resulting in model fragmentation, limited knowledge sharing across tasks, poor generalization, and increased system complexity. To address these limitations, we introduce TimeRAN, a unified multi-task learning framework for time-series modeling i — Ioannis Panitsas, Leandros Tassiulas

Anthropic's refusal to arm AI is exactly why the UK wants it
The Anthropic UK expansion story is less about diplomatic courtship and more about what happens when a government punishes a company for having principles. In late February, US Defence Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a stark ultimatum: remove guardrails preventing Claude from being used for fully autonomous weapons and domestic mass surveillance, [...] The post Anthropic's refusal to arm AI is exactly why the UK wants it appeared first on AI News.

Enabling Deterministic User-Level Interrupts in Real-Time Processors via Hardware Extension
arXiv:2604.04015v1 Announce Type: new Abstract: The growing complexity of real-time embedded systems demands strong isolation of software components into separate protection domains to reduce attack surfaces and limit fault propagation. However, application-supplied device interrupt handlers -- even untrusted -- have to remain in the kernel to minimize interrupt latency, undermining security and burdening manual certifications. Current hardware extensions accelerate interrupts only when the target protection domain is scheduled by the kernel; consequently, they are limited to improving average-case performance but not worst-case latency, and do not meet the requirements of critical real-time applications such as autonomous vehicles or robots. To overcome this limitation, we propose a novel

Context-Binding Gaps in Stateful Zero-Knowledge Proximity Proofs: Taxonomy, Separation, and Mitigation
arXiv:2604.03900v1 Announce Type: new Abstract: A zero-knowledge proximity proof certifies geometric nearness but carries no commitment to an application context. In stateful geo-content systems, where drops can share coordinates, policies evolve, and content has persistent identity, this gap can permit proof transfer between application objects unless extra operational invariants are maintained. We present a systems-security analysis of this deployment problem: a taxonomy of context-binding vulnerabilities, a formal off-circuit verification model for a transcript-adversary that holds a recorded proof but cannot obtain fresh coordinates, an assumption comparison across five binding strategy classes, and a concrete instantiation, Zairn-ZKP, that embeds drop identity, policy version, and ses

Graduated Trust Gating for IoT Location Verification: Trading Off Detection and Proof Escalation
arXiv:2604.03896v1 Announce Type: new Abstract: IoT location services accept client-reported GPS coordinates at face value, yet spoofing is trivial with consumer-grade tools. Existing spoofing detectors output a binary decision, forcing system designers to choose between high false-deny and high false-accept rates. We propose a graduated trust gate that computes a multi-signal integrity score and maps it to three actions: PROCEED, STEP-UP, or DENY, where STEP-UP invokes a stronger verifier such as a zero-knowledge proximity proof. A session-latch mechanism ensures that a single suspicious fix blocks the entire session, preventing post-transition score recovery. Under an idealized step-up oracle on 10,000 synthetic traces, the gate enables strict thresholds (theta_p = 0.9) that a binary gat
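The three-way gating and session latch described in this abstract can be sketched as follows. Only theta_p = 0.9 is stated in the abstract; the deny threshold theta_d and all names here are illustrative assumptions, not the paper's implementation.

```python
from enum import Enum

class Action(Enum):
    PROCEED = "proceed"
    STEP_UP = "step-up"   # escalate to a stronger verifier, e.g. a ZK proximity proof
    DENY = "deny"

def gate(score, theta_p=0.9, theta_d=0.4):
    """Map a multi-signal integrity score in [0, 1] to one of three
    actions. theta_p is the strict proceed threshold from the abstract;
    theta_d is an assumed deny threshold for illustration."""
    if score >= theta_p:
        return Action.PROCEED
    if score < theta_d:
        return Action.DENY
    return Action.STEP_UP

class Session:
    """Session latch: a single suspicious fix (DENY) blocks the entire
    session, preventing post-transition score recovery."""
    def __init__(self):
        self.latched = False

    def evaluate(self, score):
        if self.latched:
            return Action.DENY
        action = gate(score)
        if action is Action.DENY:
            self.latched = True
        return action
```

Unlike a binary detector, intermediate scores escalate to STEP-UP rather than forcing a false deny or false accept, and once a fix scores below the deny threshold every later fix in the session is rejected regardless of its score.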

