Marco DeepResearch: Unlocking Efficient Deep Research Agents via Verification-Centric Design
Published on Mar 30
Authors:
Abstract
Deep research agents autonomously conduct open-ended investigations, integrating complex information retrieval with multi-step reasoning across diverse sources to solve real-world problems. To sustain this capability on long-horizon tasks, reliable verification is critical during both training and inference. A major bottleneck in existing paradigms stems from the lack of explicit verification mechanisms in QA data synthesis, trajectory construction, and test-time scaling: errors introduced at each stage propagate downstream and degrade overall agent performance. To address this, we present Marco DeepResearch, a deep research agent optimized with a verification-centric design at three levels: (1) QA Data Synthesis: we introduce verification mechanisms into graph-based and agent-based QA synthesis to control question difficulty while ensuring answers are unique and correct; (2) Trajectory Construction: we design a verification-driven trajectory synthesis method that injects explicit verification patterns into training trajectories; and (3) Test-Time Scaling: we use Marco DeepResearch itself as a verifier at inference time, effectively improving performance on challenging questions. Extensive experimental results demonstrate that Marco DeepResearch significantly outperforms 8B-scale deep research agents on the most challenging benchmarks, such as BrowseComp and BrowseComp-ZH. Crucially, under a maximum budget of 600 tool calls, Marco DeepResearch even surpasses or approaches several 30B-scale agents, such as Tongyi DeepResearch-30B.
AI-generated summary
A verification-centric framework for deep research agents improves performance on complex benchmarks by incorporating error checking at multiple stages of development and inference.
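The page carries no code, but the third component, using the agent as its own verifier at inference time, is easy to illustrate. The sketch below is a minimal, hypothetical rendering of that idea: agent_answer and agent_verify are assumed stand-ins for the deployed agent, not the paper's actual API.

def agent_answer(question: str, seed: int) -> str:
    # One deep-research rollout: tool calls plus multi-step reasoning.
    # Hypothetical stub; a real system would call the deployed agent.
    raise NotImplementedError

def agent_verify(question: str, answer: str) -> float:
    # The same agent re-checks a candidate answer against retrieved
    # evidence and returns a confidence in [0, 1]. Also a stub.
    raise NotImplementedError

def answer_with_self_verification(question: str, n_samples: int = 4) -> str:
    # Sample several independent trajectories, then let the agent act
    # as its own verifier and keep the best-supported candidate.
    candidates = [agent_answer(question, seed=s) for s in range(n_samples)]
    return max(candidates, key=lambda a: agent_verify(question, a))

Under this reading, the 600-tool-call budget in the abstract is what pays for the extra rollouts and verification passes.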
Get this paper in your agent:
hf papers read 2603.28376
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
More about: research, paper, arxiv
TurboQuant, KIVI, and the Real Cost of Long-Context KV Cache
I Built a Free KV Cache Calculator for LLM Inference
When people talk about LLM deployment costs, they usually start with model weights. That makes sense, but once you push context length higher, the KV cache becomes one of the real bottlenecks. In many long-context setups, it is the dynamic memory cost that quietly starts dominating deployment decisions.
I built a small free tool to make that easier to estimate: TurboQuant Tools (https://turbo-quant.com/en/kv-cache-calculator).
It is a practical KV cache calculator for LLM inference. You can use it to estimate memory for:
- MHA models
- GQA models
- MQA models
- different context lengths
- different batch sizes
- di…
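The calculator itself is a web tool, but the estimate it automates is simple arithmetic: per layer, K and V each store a [batch, kv_heads, context, head_dim] tensor. A minimal sketch of that formula, with an example model shape of my own choosing rather than one taken from the post:

def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   context_len, batch_size, bytes_per_elem=2):
    # Two tensors (K and V) per layer; fp16 storage assumed by default.
    return (2 * num_layers * num_kv_heads * head_dim
            * context_len * batch_size * bytes_per_elem)

# Example: a Llama-3-8B-shaped model (32 layers, 8 KV heads via GQA,
# head_dim 128) at 8192-token context, batch 1, fp16 -> exactly 1 GiB.
print(kv_cache_bytes(32, 8, 128, 8192, 1) / 2**30, "GiB")

MQA is the num_kv_heads=1 case and MHA sets num_kv_heads equal to the full attention head count, which is why the attention variant changes the memory bill so sharply.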
Sweden brings urgency—and royalty—to Montréal as it seeks AI research partnership - BetaKit
<a href="https://news.google.com/rss/articles/CBMipAFBVV95cUxNaFdQTnJheUhjQWN4YzRIX082TlN6MWhTOUo1UnhmTGJ3RFp6dXNhbXphd3FlOUUtTGxwZWhrUkQtNEszcnhpYXhNNnVVT3RIU3QtNXlnZ0thb3VoSmZJcUExN0JTMURxZjFMLWhhVmM3dlhLemtiRUZ0azkwRGVkRDdpN1lBa1E4ZF9kWFpWaDdIM0ZHbGk3d241N1JkMEJUdVpSMw?oc=5" target="_blank">Sweden brings urgency—and royalty—to Montréal as it seeks AI research partnership</a> <font color="#6f6f6f">BetaKit</font>
Exclusive | Caltech Researchers Claim Radical Compression of High-Fidelity AI Models - wsj.com
<a href="https://news.google.com/rss/articles/CBMiuANBVV95cUxNclA0VzFOS1BiUGdMMWtwUVVoa2lTcFhJZndUanBDSndnNDdFeUJCWGhRWS1PN3RfenZ6MHdIRC1xWUtoMGlNR28tUTVJcDZWMm5iNXAxVXBZVUR5VkR3Z0phc3M3YjRxYkxldXBzemJ6SzhGckd6UFplVVJIRVZoazFhUVF2d3A3TFZnakhNb3NmRkRyamtyTDB5b3pyWVZqbl9KWGxNeUtMS3A2UmczN0Ric0Z3TVJXYXRLcnFBRmNVbFJiMXBUVTdER2VmaFZEZll6QWRjQUJneWJ3N2wzWl91bm9raTltelVzcm9YX0swVllVNk85V3Fxb3RHVDF0eU1WemxxN1A0YzNSZUVwM2xNSW5RcWE0UXRod1h0QTBNRVhwODRUSW13V3o4bHpGRWxQdG5JMnJ2STIwVlB5OFl2a2hUV1RSanRBVGxUWUlOUXI3eGtvQjBXYVpCb3Vqb3J4SDdVbVZYNWlkeENoM2xwQmNsOXlSMDYwZ1ZiVjNXYWIxOG9oSXJCaGQ4dlA2S3B2eEdVUVdadExpVnNtMTVlMHc1UURCakhsck5pekRDWXBSMTI4Sg?oc=5" target="_blank">Exclusive | Caltech Researchers Claim Radical Compression of High-Fidelity AI Models</a> <font color="#6f6f6f">wsj.com</font>
More in Research Papers
From brain scans to alloys: Teaching AI to make sense of complex research data - Penn State University
<a href="https://news.google.com/rss/articles/CBMiwAFBVV95cUxPZDFHdkptQ2VUM2hmWjhqQkxoRnBiTWoxMXRRR21MUG5TamdUMlFRWmhvYVNHaFVNREVKU3VmSnVOdDVZYnNLb2ppYXRVRTZmVFVMV1pLTlVhUm9ybTNZbGtvZTdIMnIyMHNpOEk5aU9TSmxxS2Y4V2MwazYwY3JlX1Axbk1nd3pfcWhFdUJaaDJWRXJaMFIyTTROcmFHeXI3ZzFudXJ2M1h6UHI1LW1Ca1dta2RkM3BiYndocGk3Yjg?oc=5" target="_blank">From brain scans to alloys: Teaching AI to make sense of complex research data</a> <font color="#6f6f6f">Penn State University</font>

Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work
arXiv:2505.24246v4 Announce Type: replace Abstract: As AI systems are increasingly tested and deployed in open-ended and high-stakes domains, crowdworkers are often tasked with responsible AI (RAI) content work. These tasks include labeling violent content, moderating disturbing text, or simulating harmful behavior in red-teaming exercises that shape AI system behaviors. While prior research has highlighted the risks to worker well-being associated with RAI content work, far less attention has been paid to how these risks are communicated to workers by task designers, the individuals who design and post RAI tasks. Existing transparency frameworks and guidelines, such as model cards, datasheets, and crowdworksheets, focus on documenting model information and the dataset collection process…

Togedule: Scheduling Meetings with Large Language Models and Adaptive Representations of Group Availability
arXiv:2505.01000v5 Announce Type: replace Abstract: Scheduling is a perennial, and often challenging, problem for many groups. Existing tools are mostly static, showing an identical set of choices to everyone regardless of the current status of attendees' inputs and preferences. In this paper, we propose Togedule, an adaptive scheduling tool that uses large language models to dynamically adjust the pool of choices and their presentation format. With the initial prototype, we conducted a formative study (N=10) and identified the potential benefits and risks of such an adaptive scheduling tool. Then, after enhancing the system, we conducted two controlled experiments, one each for attendees and organizers (total N=66). For each experiment, we compared scheduling with verbal messages, shared c…
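As a rough illustration of the adaptive idea (not Togedule's implementation): an LLM is asked to prune the candidate slots and pick a presentation format given the availability gathered so far. call_llm below is an assumed stub for any chat-completion API.

import json

def call_llm(prompt: str) -> str:
    # Stub for any chat-completion API; returns the model's text reply.
    raise NotImplementedError

def adapt_schedule_choices(slots: list[str],
                           availability: dict[str, list[str]]) -> dict:
    # Ask the model to shrink the option pool and choose how to render
    # it (e.g., a short list vs. a full grid) based on responses so far.
    prompt = (
        "Candidate slots: " + json.dumps(slots) + "\n"
        "Availability so far: " + json.dumps(availability) + "\n"
        'Reply as JSON: {"slots": [...], "format": "list" or "grid"}'
    )
    return json.loads(call_llm(prompt))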

Dynamic Cogeneration of Bug Reproduction Test in Agentic Program Repair
arXiv:2601.19066v2 Announce Type: replace Abstract: Bug Reproduction Tests (BRTs) have been used in many Automated Program Repair (APR) systems, primarily for validating promising fixes and aiding fix generation. In practice, when developers submit a patch, they often implement the BRT alongside the fix. Our experience deploying agentic APR reveals that developers similarly desire a BRT within AI-generated patches to increase their confidence. However, canonical APR systems tend to generate BRTs and fixes separately, and focus on producing only the fix in the final patch. In this paper, we study agentic APR in the context of cogeneration, where the APR agent is instructed to generate both a fix and a BRT in the same patch. We evaluate the effectiveness of different cogeneration strategies
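The snippet stops before the validation details, but the standard check behind a bug reproduction test is: the BRT must fail on the unpatched tree and pass once the fix is applied. A hypothetical sketch of that check, assuming the cogenerated patch can be split into a test part and a fix part and that tests run under pytest; the helper names are mine, not the paper's.

import subprocess

def run_test(test_id: str) -> bool:
    # True when the test passes; pytest exits nonzero on failure.
    return subprocess.run(["pytest", test_id],
                          capture_output=True).returncode == 0

def apply_patch(patch_file: str, revert: bool = False) -> None:
    args = ["git", "apply"] + (["--reverse"] if revert else []) + [patch_file]
    subprocess.run(args, check=True)

def brt_validates_fix(test_patch: str, fix_patch: str,
                      brt_test_id: str) -> bool:
    apply_patch(test_patch)                    # add only the cogenerated BRT
    fails_before = not run_test(brt_test_id)   # it must reproduce the bug
    apply_patch(fix_patch)                     # now apply the fix
    passes_after = run_test(brt_test_id)       # the same test must now pass
    apply_patch(fix_patch, revert=True)        # leave the working tree clean
    apply_patch(test_patch, revert=True)
    return fails_before and passes_after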