STCALIR: Semi-Synthetic Test Collection for Algerian Legal Information Retrieval
arXiv:2604.00731v1 Announce Type: new
Abstract: Test collections are essential for evaluating retrieval and re-ranking models. However, constructing such collections is challenging due to the high cost of manual annotation, particularly in specialized domains like Algerian legal texts, where high-quality corpora and relevance judgments are scarce. To address this limitation, we propose STCALIR, a framework for generating semi-synthetic test collections directly from raw legal documents. The pipeline follows the Cranfield paradigm, maintaining its core components of topics, corpus, and relevance judgments, while significantly reducing manual effort through automated multi-stage retrieval and filtering, achieving a 99% reduction in annotation workload. We validate STCALIR using the Mr. TyDi benchmark, demonstrating that the resulting semi-synthetic relevance judgments yield retrieval effectiveness comparable to human-annotated evaluations (Hit@10 ≈ 0.785). Furthermore, system-level rankings derived from these labels exhibit strong concordance with human-based evaluations, as measured by Kendall's τ (0.89) and Spearman's ρ (0.92). Overall, STCALIR offers a reproducible and cost-efficient solution for constructing reliable test collections in low-resource legal domains.
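The abstract validates the synthetic relevance judgments with Hit@10 for per-query effectiveness and with Kendall's τ and Spearman's ρ for the agreement between system-level rankings under human vs. synthetic labels. A minimal sketch of how these three quantities are computed is shown below; the scores and document IDs are hypothetical placeholders, not data from the paper, and the metric implementations are generic (tau-a without tie correction, Spearman's ρ assuming no ties) rather than the paper's exact evaluation code.

```python
from itertools import combinations

def hit_at_k(ranked_doc_ids, relevant_ids, k=10):
    """1 if any relevant document appears in the top-k results, else 0."""
    return int(any(d in relevant_ids for d in ranked_doc_ids[:k]))

def kendall_tau(x, y):
    """Kendall's tau-a over paired scores (no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def spearman_rho(x, y):
    """Spearman's rho via rank transform (assumes no tied scores)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-system effectiveness under human vs. synthetic qrels:
# one discordant pair (systems 1 and 3 swap order) lowers both correlations.
human = [0.62, 0.55, 0.71, 0.48, 0.66]
synthetic = [0.60, 0.53, 0.70, 0.54, 0.65]
print(round(kendall_tau(human, synthetic), 2))   # → 0.8
print(round(spearman_rho(human, synthetic), 2))  # → 0.9
print(hit_at_k(["d3", "d7", "d1"], {"d1"}, k=10))  # → 1
```

Concordance near 1.0, as in the paper's reported τ = 0.89 and ρ = 0.92, means the synthetic judgments rank systems in nearly the same order a human-annotated evaluation would.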
Subjects: Information Retrieval (cs.IR)
Cite as: arXiv:2604.00731 [cs.IR]
(or arXiv:2604.00731v1 [cs.IR] for this version)
https://doi.org/10.48550/arXiv.2604.00731
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: M'hamed Amine Hatem [view email] [v1] Wed, 1 Apr 2026 10:50:28 UTC (4,222 KB)