Microsoft shivs OpenAI with three new AI models for speech and images
About that partnership...
Microsoft on Thursday unveiled public preview versions of three home-baked machine learning models focused on speech recognition, speech synthesis, and image generation.
The release makes the Windows biz look more like a direct competitor to OpenAI than an investor – Redmond held an OpenAI stake valued at about $135 billion as of last October.
The models include: MAI-Transcribe-1, a speech recognition model that delivers "enterprise-grade accuracy across 25 languages at approximately 50 percent lower GPU cost than leading alternatives"; MAI-Voice-1, a speech generation model that can supposedly produce 60 seconds of audio in less than a second on a single GPU; and MAI-Image-2, a text-to-image model, to compound the despair of digital artists.
OpenAI just happens to offer its own speech recognition, speech generation, and text-to-image models.
Microsoft's models are available through Foundry (formerly Azure AI Studio), a platform to develop AI agents and applications.
Naomi Moneypenny, who leads the Microsoft Azure AI Foundry Models product team, talked up the model arrivals in a blog post.
"These are the same models already powering our own products such as Copilot, Bing, PowerPoint, and Azure Speech, and now they're available exclusively on Foundry for developers to use," she wrote.
The models look well-suited for common enterprise use cases, such as customer support agents that can recognize speech and generate a spoken response. Moneypenny suggests they would also be useful for captioning large events and meetings, for media subtitling and archiving, for education and training, and for gathering customer and market insights from focus groups.
Microsoft is already consuming its own dog food here – Copilot's Audio Expressions runs on MAI-Voice-1 while Copilot's Voice Mode transcription service uses MAI-Transcribe-1.
Developers can try these two models via Azure Speech.
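For a rough sense of what calling a hosted transcription model involves, the sketch below assembles a speech-to-text request. Note that the endpoint host, header names, and payload fields here are illustrative assumptions, not Microsoft's documented Azure Speech or Foundry API; only the model name comes from the announcement.

```python
import json

# Hypothetical request builder for a hosted transcription model such as
# MAI-Transcribe-1. The endpoint URL, headers, and payload fields are
# illustrative assumptions, NOT the documented Azure Speech/Foundry API.
def build_transcription_request(audio_ref: str, language: str = "en-US") -> dict:
    """Assemble the pieces of a speech-to-text request without sending it."""
    return {
        "url": "https://example.cognitiveservices.azure.com/speech/transcriptions",  # placeholder host
        "headers": {
            "Ocp-Apim-Subscription-Key": "<YOUR_KEY>",   # placeholder credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "MAI-Transcribe-1",  # model name from the announcement
            "language": language,
            "audio": audio_ref,           # in practice, uploaded audio or a blob URL
        }),
    }

req = build_transcription_request("meeting.wav", language="de-DE")
print(json.loads(req["body"])["model"])
```

In a real integration, the assembled request would be posted with an HTTP client and the JSON response parsed for the transcript; the exact shapes depend on the service's actual API reference.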
When Microsoft announced that it had renegotiated its agreement with OpenAI, the Windows biz indicated that the partnership would continue at least to 2032 – a scenario that assumes no AI market implosion. But it also highlighted areas of competition. "Microsoft can now independently pursue AGI [artificial general intelligence] alone or in partnership with third parties," the company said at the time. That statement on its own frees Microsoft to go its own way on AI under the guise of AGI research.
Microsoft has some incentive to hedge its bets. Its OpenAI ties showed strain back in January when Microsoft investors signaled dissatisfaction with the company's exposure to OpenAI's considerable spending. The AI hype-leader is burning cash and is expected to lose $14 billion this year, according to internal projections published by The Information. An internal effort to streamline its focus on enterprise customers is reportedly underway, and it killed its token-incinerating but not particularly useful video generator, Sora 2, late last month.
Two weeks ago, Microsoft CEO Satya Nadella announced leadership changes affecting the company's Copilot products and superintelligence effort. Jacob Andreou was tapped to lead the company's Copilot experience as EVP across Microsoft consumer and commercial products, reporting directly to Nadella. Copilot now focuses on four areas: Copilot experience, Copilot platform, Microsoft 365 apps, and AI models.
Presumably, Andreou's AI model remit isn't simply checking in with OpenAI to see what models are available. And as if Microsoft's model ambitions weren't obvious enough, Nadella said Mustafa Suleyman will continue to steer Microsoft's AI research – entirely unnecessary if the ambition were to remain dependent on OpenAI. ®
The Register AI/ML
https://go.theregister.com/feed/www.theregister.com/2026/04/02/microsoft_models_homegrown_ai_models/