Datadog bets DIY AI will mean it dodges the SaaSpocalypse
The theory is that its domain-specific model will beat generalist LLMs on results and economics
Datadog is close to releasing an updated AI model that it thinks will help it avoid the so-called SaaSpocalypse – customers using AI to build their own tools.
The observability tools vendor already created a model called Toto-Open-Base that the company's explanatory paper says it built with 151 million parameters, trained on more than two trillion time-series data points – apparently the largest pretraining dataset for any open-weights time-series foundation model. All the data used to train the model came from Datadog itself, gathered in the course of operating its SaaSy observability services.
In conversation with The Register, Datadog chief product officer Yanbing Li said the company is revising its next model, but sees that effort as a means to an end.
"What is the SaaS company's role?" she asked, before answering: "To innovate in their domain."
For Datadog, that means creating a model specific to its domain – observability – rather than relying on a generic LLM.
Li thinks developing models brings two things to Datadog.
One is that AI becomes part of its platform, rather than requiring customers to set a token budget on another service. The other is better agents that detect and predict anomalies more effectively.
She claimed Datadog's site reliability agent can already investigate incidents, provide root cause analysis, and suggest remediation actions.
AI remains a flaky field and agents make mistakes. The Register therefore put it to Li that operators of mission-critical IT must be wary before letting agents suggest changes to their systems, let alone enact those changes without supervision.
She agreed and said for AI systems to win trust, their output must be both explainable and verifiable. Using its own models makes that easier for Datadog, she said. They have also helped the company to create a tool that watches AI platforms while they work and can detect signs they are producing hallucinated output.
"I do not worry about a race to develop models, but applying them," she said, adding that she thinks users will apply Datadog's models because they allow constant monitoring of health – a bit like wearable devices.
"Today, when we see a doctor, it is an expensive hassle, so we only visit when we are ill," she said. Smartwatches packed full of sensors, plus AI to analyze those signals, mean it's now possible to detect and predict illness.
Li thinks Datadog offers a similar change from occasional to constant diagnosis and can dodge the SaaSpocalypse.
"What is vulnerable in this transition is point tools, when customers do not act in your tool," she said. "Those things are more easily disrupted."
She reckons AI has seen Datadog transcend SaaS to become a platform.
Every vendor aspires to that status because it makes it harder for customers to leave. Maybe AI can solve that one day. ®