Nvidia’s $2 billion Marvell bet is not an investment. It is a toll booth.
Nvidia has invested $2 billion in Marvell Technology and folded the chipmaker into its NVLink Fusion ecosystem, creating a partnership that covers custom AI accelerators, silicon photonics, and 5G/6G infrastructure. The deal ensures that every custom chip Marvell designs for hyperscalers like Amazon, Google, and Microsoft still generates Nvidia revenue through mandatory platform components, turning what looked like a competitive threat into an ecosystem tax.
Nvidia announced on Monday that it has invested $2 billion in Marvell Technology and entered a strategic partnership centred on NVLink Fusion, the rack-scale platform that allows third-party silicon to plug directly into Nvidia’s proprietary interconnect fabric. Marvell’s stock surged nearly 13 per cent on the news. Nvidia’s rose 5.6 per cent. The market read it as a deal. The more accurate reading is that it is infrastructure policy, written in silicon.
The partnership has Marvell supplying custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia provides everything else: Vera CPUs, ConnectX network interface cards, BlueField data processing units, NVLink interconnect, and Spectrum-X switches.
The two companies will also collaborate on silicon photonics, the technology that uses light instead of copper to move data between chips at the speeds that next-generation AI clusters demand. Jensen Huang framed it in characteristically expansive terms. “The inference inflection has arrived,” the Nvidia chief executive said. “Token generation demand is surging, and the world is racing to build AI factories.”
The strategic subtlety sits in the architecture of NVLink Fusion itself. Every NVLink Fusion platform must include at least one Nvidia product, whether a CPU, GPU, or switch. Nvidia also controls which partners receive NVLink IP licences. This means that the custom AI accelerators Marvell designs for hyperscalers, the very chips these customers commission specifically to reduce their dependence on Nvidia GPUs, will still generate Nvidia revenue on every rack deployed. It is, as Tom’s Hardware put it, a tax on custom ASICs.
The deal deepens a pattern that has become unmistakable. Nvidia has made a series of $2 billion investments in recent months, including stakes in CoreWeave, Nebius, Synopsys, Coherent, and Lumentum. Each targets a different layer of the AI infrastructure stack that is being built at unprecedented speed: cloud providers, chip design tools, optical networking components, and now custom silicon. The common thread is that each investment makes the recipient more dependent on Nvidia’s platform while Nvidia gains both financial exposure to and architectural influence over potential competitors.
Marvell is a particularly interesting target because its fastest-growing business is designing the custom AI accelerators that hyperscalers use to displace Nvidia GPUs. The company’s custom AI XPU business generated $1.5 billion in fiscal 2026 revenue and is expected to double by fiscal 2028. Marvell currently has 18 active custom silicon projects, including 12 devices for Amazon, Google, Microsoft, and Meta, and six for emerging AI customers.
Amazon’s Trainium chips, Microsoft’s Maia accelerators, and Google’s TPUs all flow through Marvell’s design capabilities. By investing $2 billion and pulling Marvell into NVLink Fusion, Nvidia has effectively ensured that the weapons built to beat it can only be fired with Nvidia ammunition.
NVLink Fusion’s partner roster has expanded rapidly since its debut at Computex 2025. Samsung Foundry joined in October to offer manufacturing support on its 3nm and 2nm nodes. Arm entered in November, enabling its licensees to build CPUs with native NVLink connectivity. SiFive joined in January, bringing RISC-V into the ecosystem. Fujitsu, Qualcomm, MediaTek, Alchip, Astera Labs, Synopsys, and Cadence were among the original partners.
The breadth of the list is the point: NVLink Fusion is becoming the default interconnect standard for custom AI silicon, not because it is open, but because Nvidia’s software ecosystem, particularly CUDA, makes it the path of least resistance for customers who need their hardware to work immediately.
The open alternative, the Ultra Accelerator Link consortium backed by AMD, Intel, Broadcom, Cisco, Google, HPE, Meta, and Microsoft, is designed to break exactly this kind of lock-in. But UALink faces what analysts describe as a crisis of the commons: its members have competing priorities, its 128G specification launch trails the pace of accelerator deployment, and several of its key members now have Nvidia money on their balance sheets. Nvidia’s financial stakes in companies nominally committed to an open standard raise legitimate questions about whether that standard can develop at the speed needed to offer a genuine alternative.
For Marvell’s chief executive Matt Murphy, the deal addresses a practical constraint. “By connecting Marvell’s leadership in high-performance analog, optical DSP, silicon photonics, and custom silicon to Nvidia’s expanding AI ecosystem through NVLink Fusion,” Murphy said, “we are enabling customers to build scalable, efficient AI infrastructure.”
The translation: Marvell’s hyperscaler customers want custom chips that work seamlessly with the Nvidia infrastructure already deployed in their data centres, and NVLink Fusion is how that happens.
The silicon photonics component may prove the most consequential element of the partnership in the medium term. As AI clusters scale to hundreds of thousands of GPUs, the copper interconnects that have served the industry for decades are approaching fundamental bandwidth and energy limits. Optical interconnects can move data faster and more efficiently, but the technology remains expensive and difficult to manufacture at scale. Nvidia and Marvell collaborating on silicon photonics positions both companies at the centre of what could become the next critical bottleneck in AI infrastructure, after chips and after power.
The 5G and 6G dimensions of the partnership, encompassing what Nvidia calls AI-RAN infrastructure, signal an ambition that extends beyond the data centre entirely. If wireless networks increasingly rely on AI for signal processing and resource allocation, the base station becomes another compute node in the Nvidia ecosystem, running on Nvidia platforms with Marvell connectivity. It is the kind of horizontal expansion that turns a chip company into an infrastructure company.
Nvidia still commands roughly 90 per cent of the data centre GPU and AI accelerator market. The semiconductor industry generated $791.7 billion in sales in 2025 and is forecast to grow another 26 per cent in 2026, which would carry it to roughly $1 trillion. Against that backdrop, the commercial AI market is accelerating faster than anyone projected, and the companies racing to build it need hardware that works now, not hardware that might work when an open standard catches up. That urgency is Nvidia’s greatest asset and NVLink Fusion’s most effective sales pitch.
The $2 billion is a rounding error on Nvidia’s balance sheet. What it buys is something no amount of R&D spending can replicate: the architectural certainty that even the chips designed to replace Nvidia will be built inside an Nvidia-controlled ecosystem. It is not a partnership in any conventional sense. It is a toll booth on the only road that leads to the fastest-growing market in technology.