Fiddler and Domino Integration: Streamline MLOps and LLMOps to Accelerate the Production of AI Applications
The Fiddler and Domino integration helps companies accelerate the production of AI solutions and streamline their end-to-end MLOps and LLMOps observability workflows.
We’re excited to announce the Fiddler and Domino partnership! Together, we’re helping companies accelerate the production of AI solutions and streamline their end-to-end MLOps and LLMOps observability.
Complete observability for your Domino models with Fiddler
The Domino Platform and the Fiddler AI Observability Platform allow your team to streamline MLOps and LLMOps workflows. Fiddler creates a continuous feedback loop from pre-production validation to post-production monitoring, ensuring your ML models and large language model (LLM) applications are optimized, high-performing, and safe.
MLOps Observability
Data scientists and AI practitioners in MLOps can explore data and train models in the Domino Platform, then use Fiddler's integration with Domino to validate those models in the Fiddler AI Observability Platform before launching them into production. Fiddler monitors production models for model drift, data drift, performance, data integrity, and traffic behind the scenes, and alerts ML teams as soon as a high-priority model's performance dips.
Fiddler goes beyond measuring model metrics. It arms ML teams with a 360° view of their models using rich diagnostics and explainable AI. Contextual model insights connect model performance metrics to model issues and anomalies, creating a feedback loop between production and pre-production in the MLOps workflow. Fiddler helps ML teams pinpoint areas for model improvement; they can then return to earlier stages of the MLOps workflow in Domino to explore and gather new data for model retraining.
ML teams have a seamless experience in their end-to-end MLOps workflow using Fiddler and Domino
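To make that feedback loop concrete, here is a minimal sketch of requesting a point-level explanation through the Fiddler Python client once a model and baseline are registered (the full registration walkthrough follows below). It assumes the v1 fiddler-client's run_explanation method and uses hypothetical placeholder names for the project, model, and feature columns; parameter names vary across client versions, so treat this as illustrative rather than canonical.
import fiddler as fdl
import pandas as pd

# Hypothetical setup mirroring the walkthrough below
fiddler_client = fdl.FiddlerApi(
    url="Your Fiddler URL",
    org_id="Your Fiddler Org ID",
    auth_token="Your Fiddler Auth Token",
)

# One row to explain; columns must match the model's registered features (hypothetical here)
row = pd.DataFrame([{"feature_1": 0.42, "feature_2": "A"}])

# Assumed v1-client method for point explanations; check your client version
explanation = fiddler_client.run_explanation(
    project_id="Your Project Name",
    model_id="Your Model Name",
    dataset_id="Your Training Dataset",
    df=row,
)
print(explanation)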
LLMOps Observability
Data and software engineers and AI practitioners in LLMOps can evaluate the robustness, safety, and correctness of LLM applications in pre-production using Fiddler Auditor, the open-source LLM robustness library. Fiddler Auditor is available on GitHub and on the Domino AI Hub, so Domino users can red-team and monitor LLMs without leaving their environment.
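As a rough illustration of what a pre-production Auditor run looks like, the sketch below follows the pattern in the Fiddler Auditor README: wrap an LLM, define the expected behavior (perturbed prompts should produce semantically similar generations), and evaluate a prompt for robustness. The model names, threshold, and prompt are hypothetical, and the Auditor API may differ across versions.
from langchain.llms import OpenAI
from sentence_transformers import SentenceTransformer
from auditor.evaluation.expected_behavior import SimilarGeneration
from auditor.evaluation.evaluate import LLMEval

# Wrap the LLM under test (hypothetical model name)
openai_llm = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Expected behavior: perturbed prompts should yield semantically similar generations
similar_generation = SimilarGeneration(
    similarity_model=SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2"),
    similarity_threshold=0.75,  # hypothetical threshold
)

llm_eval = LLMEval(
    llm=openai_llm,
    expected_behavior=similar_generation,
)

# Auditor perturbs the prompt and checks whether responses stay consistent
test_result = llm_eval.evaluate_prompt_robustness(
    prompt="Which country hosted the 2024 Summer Olympics?",
    pre_context="Answer the question in a concise manner.",
)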
Once LLM applications are in production, users can monitor them for correctness, safety, and privacy metrics. LLMOps teams can also perform root cause analysis using a 3D UMAP visualization to pinpoint problematic prompts and responses and understand how to improve their applications.
Get started on MLOps with Fiddler in three steps
Let's walk through how you can start monitoring Domino ML models in Fiddler. Install and initialize the Fiddler client to validate and monitor ML models built on the Domino Platform in minutes by following the steps below or as described in our documentation:
1. Upload a baseline dataset
Retrieve your pre-processed training data from Domino’s TrainingSets. Then load it into a dataframe and pass it to Fiddler:
from domino.training_sets import TrainingSetClient, model
import fiddler as fdl

# Get your training data from Domino
TRAINING_SET = "Your Training Dataset"
training_set_by_num = TrainingSetClient.get_training_set_version(
    training_set_name=TRAINING_SET,
    number=2,
)

baseline_dataset = training_set_by_num.load_raw_pandas()
# Initialize the Fiddler client
fiddler_client = fdl.FiddlerApi(
    url="Your Fiddler URL",
    org_id="Your Fiddler Org ID",
    auth_token="Your Fiddler Auth Token",
)

dataset_info = fdl.DatasetInfo.from_dataframe(
    baseline_dataset,
    max_inferred_cardinality=100,
)

fiddler_client.upload_dataset(
    project_id='Your Project Name',
    dataset_id=TRAINING_SET,
    dataset={'baseline': baseline_dataset},
    info=dataset_info,
)
2. Add metadata about the model
Share model metadata: Use Domino Data Lab's MLflow implementation to query the model registry and retrieve the model signature, which describes the model's inputs and outputs as a dictionary:
import mlflow
from mlflow.tracking import MlflowClient

# Initialize the MLflow client
client = MlflowClient()
model_version_info = client.get_model_version(model_name, model_version)

# Get the model URI
model_uri = client.get_model_version_download_uri(model_name, model_version)

# Get the model signature
mlflow_model_info = mlflow.models.get_model_info(model_uri)
model_inputs_schema = mlflow_model_info.signature.inputs.to_dict()
model_inputs = [col['name'] for col in model_inputs_schema]
Now you can share the model signature with Fiddler as part of the Fiddler ModelInfo object:
features = model_inputs
model_task = fdl.ModelTask.BINARY_CLASSIFICATION

model_info = fdl.ModelInfo.from_dataset_info(
    dataset_info=fiddler_client.get_dataset_info('Your Project Name', TRAINING_SET),
    target='TARGET COLUMN',
    dataset_id=TRAINING_SET,
    model_task=model_task,
    features=features,
    outputs=['output_column'],
)

# Upload model info to Fiddler
fiddler_client.add_model(
    project_id='Your Project Name',
    dataset_id=TRAINING_SET,
    model_id='Your Model Name',
    model_info=model_info,
)
3. Publish events
You can query the data sources in your Domino environment to pull the model inferences, load them into a dataframe, and publish them to Fiddler:
from domino.data_sources import DataSourceClient
# Instantiate a client and fetch the data source instance
redshift = DataSourceClient().get_datasource("YOUR-DATA-SOURCE")

query = """
SELECT * FROM INFERENCE_TABLE
"""

# res is a simple wrapper of the query result
res = redshift.query(query)

# to_pandas() loads the result into a pandas dataframe
df = res.to_pandas()

fiddler_client.publish_events_batch(
    project_id='Your Project Name',
    model_id='Your Model Name',
    batch_source=df,
)
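One practical note: if your inference table carries a timestamp column, you can point the client at it so events are binned by when each inference actually happened rather than when it was published. The sketch below assumes recent v1 fiddler-client releases expose this via a timestamp_field parameter; the name may differ in your version, and the column name is hypothetical.
# Hypothetical: bin events by the inference time recorded in the table
fiddler_client.publish_events_batch(
    project_id='Your Project Name',
    model_id='Your Model Name',
    batch_source=df,
    timestamp_field='inference_timestamp',  # assumed column in INFERENCE_TABLE
)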
That's it! Now you can jump into your Fiddler environment to start observing the model data you just published. Fiddler will alert you whenever there are issues with your model.
Fiddler alert context for declining model accuracy
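Alerts like the one above can be configured in the Fiddler UI, and recent v1 clients also expose them programmatically. The sketch below assumes the v1 fiddler-client's add_alert_rule API with hypothetical thresholds; enum and parameter names differ across releases, so consult the Fiddler documentation for your version.
import fiddler as fdl

# A sketch assuming the v1 fiddler-client alert API; names may differ by version
fiddler_client.add_alert_rule(
    name="accuracy-dip-alert",
    project_id="Your Project Name",
    model_id="Your Model Name",
    alert_type=fdl.AlertType.PERFORMANCE,
    metric=fdl.Metric.ACCURACY,
    bin_size=fdl.BinSize.ONE_DAY,
    compare_to=fdl.CompareTo.RAW_VALUE,
    condition=fdl.AlertCondition.LESSER,
    warning_threshold=0.85,   # hypothetical thresholds
    critical_threshold=0.75,
    priority=fdl.Priority.HIGH,
)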
We’re here to help. Contact our AI experts to learn how enterprises are accelerating AI solutions with streamlined end-to-end MLOps and LLMOps using Domino and Fiddler together.