Transforming OPACs into Intelligent Discovery Systems: An AI-Powered, Knowledge Graph-Driven Smart OPAC for Digital Libraries
arXiv:2604.01262v1 Announce Type: cross
Abstract: Traditional Online Public Access Catalogues (OPACs) are becoming less effective due to the rapid growth of scholarly literature. Conventional search methods, such as keyword indexing and Boolean queries, often fail to support efficient knowledge discovery. This paper proposes a Smart OPAC framework that transforms traditional OPACs into intelligent discovery systems using artificial intelligence and knowledge graph techniques. The framework enables semantic search, thematic filtering, and knowledge graph-based visualization to enhance user interaction and exploration. It integrates multiple open scholarly data sources and applies semantic embeddings to improve relevance and contextual understanding. The system supports exploratory search, semantic navigation, and refined result filtering based on user-defined themes. Quantitative evaluation demonstrates improvements in retrieval efficiency, relevance, and reduction of information overload. The proposed approach offers practical implications for modernizing digital library services and supports next-generation research workflows. Future work includes user-centric evaluation, personalization, and dynamic knowledge graph updates.
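The abstract does not specify which embedding model the framework uses, so as a minimal sketch of the retrieval mechanics it describes, the following replaces keyword/Boolean matching with vector similarity over catalogue records. The term-frequency "embedding" here is a deliberately toy stand-in for the paper's learned semantic embeddings; the function names (`embed`, `semantic_search`) and the sample catalogue are illustrative assumptions, not the authors' API.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector: a stand-in for the learned semantic
    # embeddings the paper applies (assumption, not the actual model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, records, top_k=3):
    # Rank catalogue records by similarity to the query vector,
    # rather than by exact keyword or Boolean match.
    q = embed(query)
    scored = [(cosine(q, embed(r)), r) for r in records]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [r for score, r in scored[:top_k] if score > 0]

# Hypothetical mini-catalogue of record titles.
catalogue = [
    "knowledge graph visualization for digital libraries",
    "boolean query indexing in classic OPACs",
    "semantic embeddings for scholarly search relevance",
]
results = semantic_search("semantic search in digital libraries", catalogue)
```

With real embeddings, records sharing meaning but no surface vocabulary would still score highly, which is the property that distinguishes this style of retrieval from keyword indexing.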
Comments: 8 pages, 4 tables, 6 figures; presented at the Intellib 2026 International Conference
Subjects: Digital Libraries (cs.DL); Artificial Intelligence (cs.AI); Information Retrieval (cs.IR)
ACM classes: H.3.3; I.2.4
Cite as: arXiv:2604.01262 [cs.DL]
(or arXiv:2604.01262v1 [cs.DL] for this version)
https://doi.org/10.48550/arXiv.2604.01262
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Rajeevan M S [v1] Wed, 1 Apr 2026 12:48:36 UTC (857 KB)