Defense Innovation Unit Issues Success Memo to Fiddler AI
Discover how Fiddler AI Observability helps the Department of Defense to deploy ML and LLM models safely and reliably at scale, while ensuring responsible AI.
We are honored to share that the Department of Defense (DoD) Defense Innovation Unit (DIU) has awarded Fiddler a Success Memo for completing the Automated Machine Learning for Mine Countermeasures Operations (AMMO) MLOps prototype with the U.S. Navy.
The DIU, in collaboration with the Navy, posted Machine Learning Operations (MLOps) as an area of interest on May 31, 2022, and selected Fiddler as one of the awardees for the prototype proposal.
The goal of the AMMO prototype is to build an MLOps pipeline toolset capability to rapidly retrain the Automatic Target Recognition (ATR) ML models powering the U.S. Navy’s mine countermeasures (MCM) application, thereby empowering the Navy to quickly adapt to changing undersea threats in diverse theaters.
In addition, the Fiddler AI Observability platform for federal agencies was recently deemed Awardable by the DoD Chief Digital and Artificial Intelligence Office (CDAO)’s Tradewind Solutions Marketplace to accelerate the procurement and adoption of AI/ML, data, and analytics at the DoD.
The following key capabilities of Fiddler have been validated and are leveraged in the AMMO workflow:
- Image Explainability: Enterprise-grade, on-demand explainability to help model developers and, in the future, mission operators understand and trust the decisions of classification and object detection models
- Image Monitoring: Patented data drift monitoring of image embeddings, integrated with visual debugging of those embeddings using UMAP. This helps teams identify operational changes in the AI model’s behavior and quickly perform root cause analysis of any issues
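To make the idea of embedding drift monitoring concrete, here is a minimal, illustrative sketch of comparing a production window of image embeddings against a baseline window. This is not Fiddler's patented method (which is cluster-based and pairs drift scores with UMAP visualizations); the centroid-distance metric and all names below are assumptions chosen for simplicity.

```python
import numpy as np

def embedding_drift_score(baseline: np.ndarray, production: np.ndarray) -> float:
    """Euclidean distance between the centroids of a baseline embedding
    window and a production embedding window. Values near zero suggest
    the embedding distribution is stable; larger values suggest drift.
    Illustrative only -- not Fiddler's patented drift algorithm."""
    return float(np.linalg.norm(baseline.mean(axis=0) - production.mean(axis=0)))

# Synthetic 128-dim "image embeddings" standing in for real model outputs.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 128))  # reference window
no_drift = rng.normal(0.0, 1.0, size=(500, 128))  # same distribution
drifted  = rng.normal(0.5, 1.0, size=(500, 128))  # shifted distribution

print(embedding_drift_score(baseline, no_drift))  # small
print(embedding_drift_score(baseline, drifted))   # noticeably larger
```

In practice, a drift alert on a score like this would trigger the root-cause step: projecting the two embedding windows into 2-D (e.g., with UMAP) and visually inspecting which regions of the production data moved away from the baseline.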
These capabilities were also validated in a live demonstration of the AMMO prototype which showed a 97% decrease in the time needed to update the ATR models with this MLOps toolset.
The Success Memo, followed by the production migration of Fiddler along with the remaining vendors in the AMMO MLOps toolset, represents a substantial step forward in the Navy’s MCM capabilities in subsea and seabed warfare. While AMMO shows significant benefits in this use case, the methodology and implementation are expected to extend seamlessly to other use cases, sensor types, and modalities across the DoD. This allows the DoD to confidently deploy ML and, subsequently, LLM models safely and reliably at scale. With AI policy top of mind following the Presidential Executive Order on Trustworthy Use of AI, Fiddler’s AI Observability platform also enables federal teams to meet the outlined transparency mandates.
“IQT first invested in Fiddler in 2019 because we believed that explainability and transparency would be essential to the responsible adoption of Artificial Intelligence across the U.S. government. Fiddler’s recent success with Project AMMO is a clear demonstration of how AI can and should be built to support our national security in line with the President’s Executive Order on safe, secure, and trustworthy AI.” — A.J. Bertone, Managing Partner, In-Q-Tel
"UUVs are critical to US defense strategy, particularly in the Indo-Pacific region, providing critical intelligence capabilities. Working with Fiddler AI and other partners, we've crafted an AI model development and deployment tech stack that enables operational effectiveness."— Joel Meyer, President of Public Sector, Domino
“Project AMMO brings together Fiddler's Observability platform with Latent AI's Tactical Edge Optimization platform to enable more comprehensive end-to-end AI/ML solutions for the DoD.”— Jags Kandasamy, CEO, Latent AI
Project AMMO highlights the DIU’s and the DoD’s commitment to bringing next-generation AI and ML tooling to enable rapid deployment of responsible AI.
For more information about AI monitoring and explainability for Government, contact our AI experts.
The AMMO and UGIS technology demonstration teams
Fiddler AI Blog
https://www.fiddler.ai/blog/defense-innovation-unit-issues-success-memo-to-fiddler-ai