The Future of AI Software Development is Agentic
Today in New York, our flagship MongoDB.local event is bringing together thousands of developers and tech leaders to discuss the future of building with MongoDB. Among the many exciting innovations and product announcements shared during the event, one theme has stood out: empowering developers to reliably build with AI and create AI solutions at scale on MongoDB. This post will explore how these advancements are set to accelerate developer productivity in the AI era.
Ship faster with the MongoDB MCP Server
Software development is rapidly evolving with AI tools powered by large language models (LLMs). From AI-driven editors like VS Code with GitHub Copilot and Windsurf, to terminal-based coding agents like Claude Code, these tools are transforming how developers work. While these tools already deliver tremendous productivity gains, coding agents are still limited by the context available to them. Since databases hold the core of most application-related data, access to configuration details, schemas, and sample data from databases is essential for generating accurate code and optimized queries.
With Anthropic’s introduction of the Model Context Protocol (MCP) in November 2024, a new way emerged to connect AI agents with data sources and services. Database connection and interaction quickly became one of the most popular use cases for MCP in agentic coding.
Today, we’re excited to announce the general availability (GA) of the MongoDB MCP Server, giving AI assistants and agents access to the context they need to explore, manage, and generate better code with MongoDB. Building on our public preview used by thousands of developers, the GA release introduces key capabilities to strengthen production readiness:
- Enterprise-grade authentication (OIDC, LDAP, Kerberos) and proxy connectivity.
- Self-hosted remote deployment support, enabling shared deployments across teams, streamlined setup, and centralized configuration. Note that we recommend following security best practices, such as implementing authentication for remote deployments.
- Availability as a bundle with the MongoDB for VS Code extension, delivering a complete experience: visually explore your database with the extension or interact with the same connection through your AI assistant, all without switching context.
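To illustrate how an AI assistant connects to the server, a typical MCP client configuration might register it as shown below. The package name `mongodb-mcp-server` and the `--connectionString` flag reflect the public preview and should be verified against the current documentation; the connection string itself is a placeholder.

```json
{
  "mcpServers": {
    "MongoDB": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server",
        "--connectionString",
        "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/myDatabase"
      ]
    }
  }
}
```

With this in place, the assistant can list collections, inspect schemas, and run queries against the configured deployment without the developer leaving the editor.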
Figure 1. Overview of the MongoDB MCP Server.
Meeting developers where they are with n8n and CrewAI integrations
AI is transforming how developers build with MongoDB, not just in coding workflows, but also in creating AI applications and agents. From retrieval-augmented generation (RAG) to powering agent memory, these systems demand a database that can handle diverse data types—such as unstructured text (e.g., messages, code, documents), vectors, and graphs—all while supporting comprehensive retrieval mechanisms at scale like vector and hybrid search. MongoDB delivers this in a single, unified platform: the flexible document model supports the varied data agents need to store, while advanced, natively integrated search capabilities eliminate the need for separate vector databases. With Voyage AI by MongoDB providing state-of-the-art embedding models and rerankers, developers get a complete foundation for building intelligent agents without added infrastructure complexity.
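As a minimal sketch of what that unified retrieval looks like in application code, the function below builds an Atlas Vector Search aggregation pipeline. The index name, field names, and candidate multiplier are illustrative placeholders; in practice the query vector would come from an embedding model such as one of Voyage AI's, and the pipeline would be passed to a PyMongo `aggregate` call against an Atlas collection.

```python
def vector_search_pipeline(query_vector, index="vector_index",
                           path="embedding", limit=5):
    """Build a $vectorSearch aggregation pipeline for Atlas Vector Search.

    Index and field names here are assumptions for illustration; they must
    match a vector index actually defined on the target collection.
    """
    return [
        {
            "$vectorSearch": {
                "index": index,               # name of the Atlas vector index
                "path": path,                 # field holding stored embeddings
                "queryVector": query_vector,  # embedding of the user's query
                "numCandidates": limit * 20,  # candidates scanned before ranking
                "limit": limit,               # documents returned
            }
        },
        # Surface the similarity score alongside each matched document.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

# Hypothetical usage against an Atlas cluster (requires pymongo):
# client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
# docs = client.mydb.chunks.aggregate(vector_search_pipeline(query_embedding))
```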
As part of our commitment to making MongoDB as easy to use as possible, we’re excited to announce new integrations with n8n and CrewAI.
n8n has emerged as one of the most popular platforms for building AI solutions, thanks to its visual interface and out-of-the-box components that make it simple and accessible to create reliable AI workflows. This integration adds official support for MongoDB Atlas Vector Search, enabling developers to build RAG and agentic RAG systems through a flexible, visual interface. It also introduces an agent chat memory node for n8n agents, allowing conversations to persist by storing message history in MongoDB.
Figure 2. Example workflow with n8n and MongoDB powering an AI agent.
Meanwhile, CrewAI—a fast-growing open-source framework for building and orchestrating AI agents—makes multi-agent collaboration more accessible to developers. As AI agents take on increasingly complex, productive workflows such as online research, report writing, and enterprise document analysis, multiple specialized agents need to interact effectively and delegate tasks to one another. CrewAI provides an easy and approachable way to build such multi-agent systems. Our official integration adds support for MongoDB Atlas Vector Search, empowering developers to build agents that leverage RAG at scale. Learn how to implement agentic RAG with MongoDB Atlas and CrewAI.
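Conceptually, the retrieval step that such an agent tool wraps is simple: embed the query, rank stored chunks by similarity, and return the top matches. The dependency-free toy below shows that ranking with cosine similarity over pre-computed vectors; in production, MongoDB Atlas Vector Search performs this ranking server-side at scale, and the embeddings would come from a real model rather than hand-written vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, chunks, k=2):
    """Rank (text, vector) chunks by similarity to the query; return top-k texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus with hand-written 2-d "embeddings" for illustration only.
chunks = [("alpha", [1.0, 0.0]), ("beta", [0.0, 1.0]), ("gamma", [0.7, 0.7])]
top = retrieve([1.0, 0.0], chunks, k=2)  # → ["alpha", "gamma"]
```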
The future is agentic
AI is fundamentally reshaping the entire software development lifecycle, including for developers building with MongoDB. New technology like the MongoDB MCP Server is paving the way for database-aware agentic coding, representing the future of software development. At the same time, we’re committed to meeting developers where they are: integrating our capabilities into their favorite frameworks and tools so they can benefit from MongoDB’s reliability and scalability to build AI apps and agents with ease.