Enhancing User-Feedback Driven Requirements Prioritization
Abstract: Context: Requirements prioritization is a challenging problem that aims to deliver the most suitable subset from a pool of candidate requirements; when formulated as an optimization problem, it is NP-hard. Feedback from end users can offer valuable support for software evolution, and ReFeed represents the state of the art in automatically inferring a requirement's priority from quantifiable properties of the feedback messages associated with a candidate requirement. Objectives: In this paper, we enhance ReFeed by shifting the focus of prioritization from treating requirements as independent entities toward interconnecting them. Additionally, we explore whether interconnecting requirements provides additional value for search-based solutions. Methods: We leverage user feedback from mobile app stores to group requirements into topically coherent clusters. Such interconnectedness, in turn, helps auto-generate additional "requires" relations among candidate requirements. These "requires" pairs are then integrated into a search-based software engineering solution. Results: Experiments on 94 requirements-prioritization instances from four real-world software applications show that our enhancement outperforms ReFeed. In addition, we illustrate how incorporating interconnectedness among requirements improves search-based solutions. Conclusion: Our findings show that requirements interconnectedness improves user-feedback-driven requirements prioritization, helps uncover additional "requires" relations among candidate requirements, and strengthens search-based release planning.
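The Methods paragraph describes clustering requirements by the topical coherence of their user feedback and then auto-generating "requires" pairs within clusters. The following is a minimal illustrative sketch of that idea, not the paper's actual pipeline: it uses a simple greedy Jaccard-overlap clustering over feedback terms and a hypothetical heuristic (more-discussed requirement "requires" the less-discussed one) purely to show the data flow; all function names and the threshold are assumptions.

```python
# Hedged sketch (NOT ReFeed's algorithm): cluster requirements by keyword
# overlap of their user feedback, then emit "requires" pairs within clusters.
from itertools import combinations


def tokens(text):
    """Crude term extraction: lowercase words longer than 3 characters."""
    return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}


def cluster(feedback, threshold=0.25):
    """Greedy single-pass clustering by Jaccard similarity of feedback terms.

    feedback: dict mapping requirement id -> list of feedback messages.
    Returns a list of (term_set, member_ids) clusters.
    """
    clusters = []
    for req_id, msgs in feedback.items():
        terms = set().union(*(tokens(m) for m in msgs))
        for c_terms, members in clusters:
            jaccard = len(terms & c_terms) / len(terms | c_terms)
            if jaccard >= threshold:
                members.append(req_id)
                c_terms |= terms  # grow the cluster's term profile in place
                break
        else:
            clusters.append((terms, [req_id]))
    return clusters


def requires_pairs(clusters, feedback):
    """Illustrative heuristic: within a cluster, the requirement with more
    feedback messages 'requires' the one with fewer."""
    pairs = []
    for _, members in clusters:
        for a, b in combinations(members, 2):
            hi, lo = sorted((a, b), key=lambda r: len(feedback[r]), reverse=True)
            pairs.append((hi, lo))
    return pairs


# Toy usage: two theme-related requirements cluster together; the export
# requirement stands alone, so only one "requires" pair is generated.
feedback = {
    "R1": ["dark mode theme please", "need dark theme option"],
    "R2": ["theme color dark setting"],
    "R3": ["export data backup"],
}
cs = cluster(feedback)
print(len(cs))                    # 2 clusters
print(requires_pairs(cs, feedback))  # [('R1', 'R2')]
```

The resulting pairs could then act as precedence constraints in a search-based release-planning formulation, which is the integration step the abstract alludes to.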
Comments: Submitted to Information and Software Technology
Subjects: Software Engineering (cs.SE)
Cite as: arXiv:2603.28677 [cs.SE]
(or arXiv:2603.28677v1 [cs.SE] for this version)
https://doi.org/10.48550/arXiv.2603.28677
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Aurek Chattopadhyay [v1] Mon, 30 Mar 2026 16:58:07 UTC (1,750 KB)
