Israel-Built Ethernet Puts Nvidia at the Center of the AI Infrastructure Overhaul

20 November 2025

Nvidia's Ethernet is rapidly closing the gap with InfiniBand, driven by networking chips engineered in Israel. Networking revenue surges even as Nvidia writes off the entire China data-center market

Photo above: Nvidia’s Spectrum-X acceleration platform

By Yochai Schweiger

Nvidia once again demonstrated overnight why it sits at the center of the global AI infrastructure race. The company reported quarterly revenue of $57 billion, up 62% year over year, with a 22% sequential jump. The primary engine remains its data-center division, which hit a new record of $51 billion, up 66% from a year earlier. Nvidia also issued an aggressive forecast for Q4: $65 billion in revenue, implying roughly 14% sequential growth, powered by accelerating adoption of the Blackwell architecture.
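As a quick sanity check on the guidance math, using the article's rounded figures rather than Nvidia's exact reported numbers:

```python
# Sanity check of the implied sequential growth cited above.
# Assumes the article's rounded revenue figures (in billions of dollars);
# Nvidia's exact reported numbers differ slightly.
q3_revenue = 57.0    # reported quarterly revenue, $B
q4_guidance = 65.0   # Q4 revenue forecast, $B

sequential_growth = (q4_guidance - q3_revenue) / q3_revenue * 100
print(f"Implied sequential growth: {sequential_growth:.1f}%")  # ~14.0%
```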

Gross margin reached 73.6% non-GAAP, supported by a favorable data-center mix and improvements in cycle time and cost structure. Meanwhile, inventory rose 32% and supply commitments jumped 63% quarter over quarter — a clear signal that the company is “loading up” for further growth.

“The clouds are sold out, and our GPU installed base – across Ampere, Hopper, and Blackwell – is fully utilized,” CFO Colette Kress said. The implication is clear: hyperscale clouds have virtually no free GPU capacity left. Kress added that Nvidia now has “visibility to a half a trillion dollars in Blackwell and Rubin revenue through the end of calendar 2026.”

The Connectivity Breakthrough: Israeli Ethernet Pushes Nvidia Forward

One of the most impactful revelations in the report was Nvidia’s networking business — much of it rooted in technology developed in Israel (the legacy Mellanox team). The division posted $8.2 billion in revenue, a staggering 162% year-over-year increase. Kress noted: “Networking more than doubled, with growth in NVLink and Spectrum-X Ethernet, alongside double-digit growth in InfiniBand.”

CEO Jensen Huang put it even more bluntly: “We are winning in data-center networking. The majority of large-scale AI deployments now include our switches, and Ethernet GPU attach rates are now roughly on par with InfiniBand.”

Behind this comment lies a genuine structural shift in the market — driven by the maturation of Spectrum-X, Nvidia’s AI-optimized Ethernet platform developed in its Israeli R&D hub. Unlike traditional Ethernet, which struggled under AI-scale loads, Spectrum-X delivers AI-grade performance, capable of handling massive throughput, synchronization, and collective operations at gigawatt scale. In other words, the “shift” Huang refers to was not caused by a change in customer behavior — but by Nvidia’s Ethernet finally becoming powerful enough.

Spectrum-X Becomes a Generic Infrastructure Layer

The result is profound: in some of the world’s largest AI projects, the number of GPUs connected via Spectrum-X is now approaching the number connected via InfiniBand — something unthinkable just two years ago.

For Nvidia, this is a strategic breakthrough. It allows the company to penetrate the Ethernet market long dominated by Broadcom and Arista, and prevents hyperscale customers from “escaping” to third-party Ethernet vendors simply because they preferred not to adopt InfiniBand. Nvidia is now pulling the entire Ethernet segment into its own ecosystem, which explains why networking revenue is growing far faster than the rest of the company.

Huang noted that cloud giants are already building gigawatt-scale AI factories on Spectrum-X:
“Meta, Microsoft, Oracle, and xAI are building gigawatt AI factories with Spectrum-X switches.”

With this, Spectrum-X is becoming part of the standard data-center fabric. According to Huang, Nvidia is now the only company with scale-up, scale-out, and scale-across platforms: NVLink inside the server, InfiniBand between servers, and Spectrum-X for hyperscale deployments. Competitors like Broadcom and Arista operate almost exclusively at the switching layer; Nvidia now controls the entire network stack from node to AI factory.

China Zeroed Out: Geopolitics Erase a Multi-Billion-Dollar Market

On the other side of the ledger, China is collapsing as a data-center market for Nvidia.

Kress stated: “Sizable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China,” adding that for next quarter, “we are not assuming any data-center compute revenue from China.”

Huang echoed the sentiment, saying the company is “disappointed in the current state” that prevents it from shipping more competitive products to China, but emphasized that Nvidia remains committed to engagement with both U.S. and Chinese governments, insisting that America must remain “the platform of choice for every commercial business – including those in China.”

Rubin Approaches: Silicon Already in Nvidia’s Hands

Looking ahead, attention is shifting to Rubin, Nvidia’s next-generation AI platform.

Huang provided an update: “We have received silicon back from our supply chain partners, and our teams across the world are executing the bring-up beautifully.”

Rubin is Nvidia’s third-generation rack-scale, full-cabinet architecture, designed for easier manufacturability while remaining compatible with Grace Blackwell systems and existing cloud and data-center infrastructure. Huang promised “an X-factor improvement in performance relative to Blackwell,” while maintaining full CUDA ecosystem compatibility. Customers, he said, will be able to scale training performance without rebuilding their entire infrastructure from scratch.

Is This an AI Bubble? Huang: “We see something very different.”

Hovering above all these numbers is the question dominating the market: is this an AI bubble?

Huang rejected the premise: “From our vantage point, we see something very different.”

He pointed to three simultaneous platform shifts: from CPU to accelerated GPU computing, from classical ML to generative AI, and the rise of agentic AI, which collectively drive multi-year infrastructure investment. “Nvidia is the singular architecture that enables all three transitions,” he said.

The report strengthens that narrative. Nvidia cites a project pipeline involving around 5 million GPUs for AI factories, claims visibility to half a trillion dollars in Blackwell and Rubin revenue through 2026, and is preparing its supply chain — from the first U.S.-made Blackwell wafers with TSMC to collaborations with Foxconn, Wistron, and Amkor — for years of excess demand.

And as Kress put it, “the clouds are sold out” down to the last token. The real question is no longer whether Nvidia can realize this vision, but how long it can stretch demand before supply finally catches up.
