Calibrated Fusion for Heterogeneous Graph-Vector Retrieval in Multi-Hop QA
Abstract: Graph-augmented retrieval combines dense similarity with graph-based relevance signals such as Personalized PageRank (PPR), but these scores have different distributions and are not directly comparable. We study this as a score calibration problem for heterogeneous retrieval fusion in multi-hop question answering. Our method, PhaseGraph, maps vector and graph scores to a common unit-free scale using percentile-rank normalization (a probability integral transform, PIT) before fusion, enabling stable combination without discarding magnitude information. Across MuSiQue and 2WikiMultiHopQA, calibrated fusion improves held-out last-hop retrieval on HippoRAG2-style benchmarks: LastHop@5 increases from 75.1% to 76.5% on MuSiQue (8W/1L, p=0.039) and from 51.7% to 53.6% on 2WikiMultiHopQA (11W/2L, p=0.023), both on independent held-out test splits. A theory-driven ablation shows that percentile-based calibration is directionally more robust than min-max normalization on both tune and test splits (1W/6L, p=0.125), while Boltzmann weighting performs comparably to linear fusion after calibration (0W/3L, p=0.25). These results suggest that score commensuration is a robust design choice, and the exact post-calibration operator appears to matter less on these benchmarks.
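The abstract gives the pipeline in prose; the sketch below makes the calibration step concrete. It is not the authors' code: the function names, the fusion weight, the temperature, and the exact form of the Boltzmann operator are assumptions layered on what the abstract states (percentile-rank/PIT calibration of each channel, then a linear or Boltzmann-weighted combination).

```python
import numpy as np
from scipy.stats import rankdata

def percentile_calibrate(scores: np.ndarray) -> np.ndarray:
    """Map raw channel scores to (0, 1] via empirical percentile rank.

    A sample-based probability integral transform (PIT): ties share
    their average rank, and every channel lands on the same unit-free
    scale regardless of its raw distribution.
    """
    return rankdata(scores, method="average") / len(scores)

def fuse_linear(vec_scores, graph_scores, alpha=0.5):
    # Convex combination of the two calibrated channels.
    v = percentile_calibrate(vec_scores)
    g = percentile_calibrate(graph_scores)
    return alpha * v + (1.0 - alpha) * g

def fuse_boltzmann(vec_scores, graph_scores, temperature=0.5):
    # One plausible reading of "Boltzmann weighting" (assumed, not taken
    # from the paper): each calibrated channel is self-weighted by
    # exp(score / T), so the channel more confident about a candidate
    # contributes more to its fused score.
    v = percentile_calibrate(vec_scores)
    g = percentile_calibrate(graph_scores)
    wv, wg = np.exp(v / temperature), np.exp(g / temperature)
    return (wv * v + wg * g) / (wv + wg)

# Toy candidate pool: dense cosine similarities and PPR mass live on
# wildly different raw scales, but fuse cleanly after calibration.
dense = np.array([0.83, 0.79, 0.41, 0.35])
ppr = np.array([1.2e-3, 4.0e-5, 9.1e-4, 2.2e-6])
print(fuse_linear(dense, ppr))  # [1.    0.625 0.625 0.25 ]
```

For contrast, the min-max baseline from the ablation rescales each channel as (s - min) / (max - min); unlike percentile ranks, that mapping stays sensitive to outliers in the raw scores, which is one plausible reason percentile calibration tests as the more robust choice.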
Comments: 10 pages, 5 figures
Subjects: Information Retrieval (cs.IR); Machine Learning (cs.LG)
ACM classes: H.3.3
Cite as: arXiv:2603.28886 [cs.IR] (or arXiv:2603.28886v1 [cs.IR] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.28886
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Andre Bacellar [v1] Mon, 30 Mar 2026 18:13:01 UTC (123 KB)