
HippoCamp: Benchmarking Contextual Agents on Personal Computers

ArXiv CS.AI · [Submitted on 1 Apr 2026]

Authors:Zhe Yang, Shulin Tian, Kairui Hu, Shuai Liu, Hoang-Nhat Nguyen, Yichi Zhang, Zujin Guo, Mengying Yu, Zinan Zhang, Jingkang Yang, Chen Change Loy, Ziwei Liu

Abstract: We present HippoCamp, a new benchmark designed to evaluate agents' capabilities in multimodal file management. Unlike existing agent benchmarks that focus on tasks such as web interaction, tool use, or software automation in generic settings, HippoCamp evaluates agents in user-centric environments, requiring them to model individual user profiles and search massive collections of personal files for context-aware reasoning. Our benchmark instantiates device-scale file systems from real-world profiles spanning diverse modalities, comprising 42.4 GB of data across over 2K real-world files. Building on the raw files, we construct 581 QA pairs to assess agents' capabilities in search, evidence perception, and multi-step reasoning. To facilitate fine-grained analysis, we provide 46.1K densely annotated structured trajectories for step-wise failure diagnosis. We evaluate a wide range of state-of-the-art multimodal large language models (MLLMs) and agentic methods on HippoCamp. Our comprehensive experiments reveal a significant performance gap: even the most advanced commercial models achieve only 48.3% accuracy in user profiling, struggling particularly with long-horizon retrieval and cross-modal reasoning within dense personal file systems. Furthermore, our step-wise failure diagnosis identifies multimodal perception and evidence grounding as the primary bottlenecks. Ultimately, HippoCamp exposes the critical limitations of current agents in realistic, user-centric environments and provides a robust foundation for developing next-generation personal AI assistants.
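The abstract describes search-and-answer evaluation over an instantiated personal file system, scored with QA pairs and diagnosed step-wise via annotated trajectories. As a rough illustration only, here is a minimal sketch of how such a QA record and an exact-match evaluation loop might be structured; the schema, field names, agent interface, and scoring metric are all assumptions, since the abstract does not specify them.

    # Minimal sketch of a HippoCamp-style QA record and evaluation loop.
    # Everything here is an assumption for illustration: the paper's actual
    # data schema, agent interface, and scoring metric are not given in the
    # abstract above.
    from dataclasses import dataclass, field

    @dataclass
    class QAPair:
        question: str              # natural-language query over the user's files
        answer: str                # gold answer
        evidence_files: list[str]  # paths inside the instantiated file system
        modalities: list[str]      # e.g. ["pdf", "image", "spreadsheet"]

    @dataclass
    class Trajectory:
        # One record per agent action (search, open file, reason), so that a
        # failure can be localized to a step, as the benchmark's densely
        # annotated trajectories are said to enable.
        steps: list[dict] = field(default_factory=list)

    def evaluate(agent, qa_pairs: list[QAPair]) -> float:
        """Exact-match accuracy over the QA set (a simple stand-in metric;
        the paper's actual scoring may differ)."""
        correct = 0
        for qa in qa_pairs:
            # `agent.run` is an assumed interface: it takes a question and
            # returns (answer_text, Trajectory).
            prediction, _trajectory = agent.run(qa.question)
            correct += int(prediction.strip().lower() == qa.answer.strip().lower())
        return correct / max(len(qa_pairs), 1)

Under this sketch, step-wise failure diagnosis would amount to inspecting the returned Trajectory whenever the exact-match check fails, attributing the error to a search, perception, or reasoning step.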

Comments: Project Page: this https URL

Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)

Cite as: arXiv:2604.01221 [cs.AI]

(or arXiv:2604.01221v1 [cs.AI] for this version)

https://doi.org/10.48550/arXiv.2604.01221

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Shulin Tian [v1] Wed, 1 Apr 2026 17:58:33 UTC (24,493 KB)
