NVIDIA Unveils an Open and Transparent Autonomous Driving Model

At this week’s NeurIPS conference, NVIDIA launched DRIVE Alpamayo-R1 (AR1), a new autonomous-driving model described as the first industry-scale VLA (Vision-Language-Action) system to be released as open source. VLA refers to a model architecture that integrates visual perception, scene understanding, causal reasoning, and action planning into a single continuous framework.

The announcement marks a significant shift for the company. While NVIDIA has spent recent years building its AV efforts around dedicated hardware platforms such as DRIVE Orin and DRIVE Thor, it had never before opened a core driving module to the broader research community. For the autonomous-driving world — where closed, proprietary decision-making systems dominate — this is a notable milestone.

A Unified Model With Causal Reasoning at Its Core
Alpamayo-R1 is an end-to-end autonomous-driving model that simultaneously performs computer vision, scene comprehension, causal reasoning, and trajectory planning. Unlike traditional AV architectures that separate perception, prediction, and planning, AR1 uses a unified VLA structure that stitches the layers together into a single, continuous decision pipeline.

At the heart of the model lies causal reasoning — the ability to break down complex driving scenarios, evaluate multiple “thought paths,” and select a final trajectory based on interpretable internal logic.
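For illustration only, here is a minimal, runnable sketch of that general pattern — propose several candidate “thought paths,” score them, and keep the best. The names, maneuvers, and scoring below are hypothetical stand-ins, not NVIDIA’s published design.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    reasoning: list       # interpretable "thought path" (steps in language)
    trajectory: str       # the maneuver the reasoning leads to
    score: float          # internal estimate of quality/safety

def propose(scene: str) -> Candidate:
    # Stand-in for the model's sampler; in a real VLA system both the
    # candidates and their scores would come from the model itself.
    maneuver = random.choice(["yield", "proceed_slowly", "stop"])
    return Candidate(
        reasoning=[f"Scene: {scene}", f"Candidate maneuver: {maneuver}"],
        trajectory=maneuver,
        score=random.random(),
    )

def plan(scene: str, n_candidates: int = 4) -> Candidate:
    # Evaluate several thought paths and keep the highest-scoring one.
    # The winner's reasoning trace explains why its trajectory was chosen.
    return max((propose(scene) for _ in range(n_candidates)),
               key=lambda c: c.score)

best = plan("pedestrian near crosswalk, lead vehicle braking")
print(best.trajectory, "|", " -> ".join(best.reasoning))
```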

According to NVIDIA, AR1 was trained on a blend of real-world data, simulation, and open datasets, including a newly introduced Chain-of-Causation dataset in which every action is annotated with a structured explanation for why it was taken. In the post-training phase, researchers used reinforcement learning, yielding a measurable improvement in reasoning quality compared with the pretrained model.
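To make the Chain-of-Causation idea concrete, here is a sketch of what a single annotated sample could look like. The schema and field names are hypothetical illustrations; NVIDIA’s actual dataset format may differ.

```python
from dataclasses import dataclass

@dataclass
class ChainOfCausationSample:
    # Hypothetical schema: an action paired with a structured
    # explanation of why it was taken.
    sensor_clip: str        # reference to the camera/sensor recording
    scene_summary: str      # natural-language description of the scene
    causal_chain: list      # ordered reasoning steps behind the action
    action: str             # the trajectory/maneuver that was chosen

sample = ChainOfCausationSample(
    sensor_clip="clip_000123",
    scene_summary="Pedestrian near a crosswalk; lead vehicle braking.",
    causal_chain=[
        "Lead vehicle's brake lights are on, so it is decelerating.",
        "The pedestrian may enter the crosswalk, so right of way is uncertain.",
        "Therefore the safe action is to reduce speed early.",
    ],
    action="decelerate_and_yield",
)
print(sample.action)
```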

The model will be released for non-commercial use on GitHub and Hugging Face. NVIDIA will also publish companion tools, including AlpaSim, a testing framework, and an accompanying open dataset for AV research.
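Since the exact repository names were not confirmed at announcement time, a cautious way to fetch the release once it lands is to search NVIDIA’s Hugging Face namespace rather than hard-code a repo id (a sketch, assuming the checkpoint is published under the “nvidia” organization):

```python
# Search NVIDIA's Hugging Face namespace for the release instead of
# hard-coding a repo id, since the exact name is not confirmed here.
from huggingface_hub import list_models, snapshot_download

matches = [m.id for m in list_models(author="nvidia", search="alpamayo")]
print("Candidate repos:", matches)

if matches:
    local_dir = snapshot_download(repo_id=matches[0])  # download weights
    print("Downloaded to:", local_dir)
```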

Open vs. Closed Models

Today’s autonomous-driving systems largely fall into two categories. Tesla uses an end-to-end Vision → Control approach, in which a single model processes camera input and outputs steering and braking commands. Tesla’s model is not open, does not provide explicit reasoning, and is not structured around a clear division between reasoning and action.

Mobileye, by contrast, maintains a more “classic” perception-prediction-planning stack built on semantic maps, deterministic algorithms, and safety rules. But Mobileye’s models are also closed systems that offer no external visibility into their decision-making logic.

This is where AR1 stands apart: it provides explicit, interpretable reasoning traces explaining why a particular trajectory was chosen — something rarely seen in AV systems, and never before at industrial scale.

The significance of making such a model open extends far beyond academia. Commercial AV stacks are black boxes, which makes regulatory evaluation, cross-model comparison, and stress-testing in rare scenarios difficult. By opening a reasoning-based driving model, NVIDIA enables transparent, reproducible experimentation — much like what Llama and Mistral have done for language models.

A Shift Toward a New Paradigm
AR1 signals a broader shift: autonomous driving is evolving toward a domain where general-purpose intelligence models play a central role, replacing rigid, hand-engineered pipelines. While there is no evidence yet that a unified VLA model can replace the entire AV stack, this is the clearest move to date toward what could be called a “physics of behavior” — an effort to understand not only what the car sees, but why it should act in a certain way.

The announcement also aligns with NVIDIA’s hardware strategy. As models become larger, more compute-intensive, and increasingly reliant on high-fidelity simulation, the case for using NVIDIA’s platforms only strengthens.

Alpamayo-R1 is not a full autonomous-driving system, but it is the first time that the cognitive heart of such a system — its decision-making logic — is being opened to researchers, OEMs, and startups. In a field long defined by closed-door development, that alone is a meaningful breakthrough.

Israel-Built Ethernet Puts Nvidia at the Center of the AI Infrastructure Overhaul

Photo above: Nvidia’s Spectrum-X acceleration platform

By Yochai Schweiger

Nvidia once again demonstrated overnight why it sits at the center of the global AI infrastructure race. The company reported quarterly revenue of $57 billion, up 62% year over year, with a 22% sequential jump. The primary engine remains its data-center division, which hit a new record of $51 billion, up 66% from a year earlier. Nvidia also issued an aggressive forecast for Q4: $65 billion in revenue, implying roughly 14% sequential growth, powered by accelerating adoption of the Blackwell architecture.

Gross margin reached 73.6% non-GAAP, supported by a favorable data-center mix and improvements in cycle time and cost structure. Meanwhile, inventory rose 32% and supply commitments jumped 63% quarter over quarter — a clear signal that the company is “loading up” for further growth.
“The clouds are sold out, and our GPU installed base – across Ampere, Hopper, and Blackwell – is fully utilized,” CFO Colette Kress said. The implication is clear: hyperscale clouds have virtually no free GPU capacity left. Kress added that Nvidia now has “visibility to half a trillion dollars in Blackwell and Rubin revenue through the end of calendar 2026.”

The Connectivity Breakthrough: Israeli Ethernet Pushes Nvidia Forward

One of the most impactful revelations in the report was Nvidia’s networking business — much of it rooted in technology developed in Israel (the legacy Mellanox team). The division posted $8.2 billion in revenue, a staggering 162% year-over-year increase. Kress noted: “Networking more than doubled, with growth in NVLink and Spectrum-X Ethernet, alongside double-digit growth in InfiniBand.”
CEO Jensen Huang put it even more bluntly: “We are winning in data-center networking. The majority of large-scale AI deployments now include our switches, and Ethernet GPU attach rates are now roughly on par with InfiniBand.”

Behind this comment lies a genuine structural shift in the market — driven by the maturation of Spectrum-X, Nvidia’s AI-optimized Ethernet platform developed in its Israeli R&D hub. Unlike traditional Ethernet, which struggled under AI-scale loads, Spectrum-X delivers AI-grade performance, capable of handling massive throughput, synchronization, and collective operations at gigawatt scale. In other words, the “shift” Huang refers to was not caused by a change in customer behavior — but by Nvidia’s Ethernet finally becoming powerful enough.

Spectrum-X Becomes a Generic Infrastructure Layer

The result is profound: in some of the world’s largest AI projects, the number of GPUs connected via Spectrum-X is now approaching the number connected via InfiniBand — something unthinkable just two years ago.
For Nvidia, this is a strategic breakthrough. It allows the company to penetrate the Ethernet market long dominated by Broadcom and Arista, and prevents hyperscale customers from “escaping” to third-party Ethernet vendors simply because they preferred not to adopt InfiniBand. Nvidia is now pulling the entire Ethernet segment into its own ecosystem — which explains why networking revenue is growing far faster than the rest of the company.

Huang noted that cloud giants are already building gigawatt-scale AI factories on Spectrum-X:
“Meta, Microsoft, Oracle, and xAI are building gigawatt AI factories with Spectrum-X switches.”
With this, Spectrum-X is becoming part of the standard data-center fabric. According to Huang, Nvidia is now the only company with scale-up, scale-out, and scale-across platforms — NVLink inside the server, InfiniBand between servers, and Spectrum-X for hyperscale deployments. Competitors like Broadcom and Arista operate almost exclusively at the switching layer; Nvidia now controls the entire network stack from node to AI factory.

China Zeroed Out: Geopolitics Erase a Multi-Billion-Dollar Market

On the other side of the ledger, China is collapsing as a data-center market for Nvidia.
Kress stated: “Sizable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China,” adding that for next quarter, “we are not assuming any data-center compute revenue from China.”
Huang echoed the sentiment, saying the company is “disappointed in the current state” that prevents it from shipping more competitive products to China, but emphasized that Nvidia remains committed to engagement with both U.S. and Chinese governments, insisting that America must remain “the platform of choice for every commercial business – including those in China.”

Rubin Approaches: Silicon Already in Nvidia’s Hands

Looking ahead, attention is shifting to Rubin, Nvidia’s next-generation AI platform.
Huang provided an update: “We have received silicon back from our supply chain partners, and our teams across the world are executing the bring-up beautifully.”
Rubin is Nvidia’s third-generation rack-scale system — a full-cabinet architecture — redefining manufacturability while remaining compatible with Grace-Blackwell and existing cloud and data-center infrastructure. Huang promised “an X-factor improvement in performance relative to Blackwell,” while maintaining full CUDA ecosystem compatibility.
Customers, he said, will be able to scale training performance without rebuilding their entire infrastructure from scratch.

Is This an AI Bubble? Huang: “We see something very different.”

Hovering above all these numbers is the question dominating the market: is this an AI bubble?
Huang rejected the premise: “From our vantage point, we see something very different.”
He pointed to three simultaneous platform shifts — from CPU to accelerated GPU computing, from classical ML to generative AI, and the rise of agentic AI — which collectively drive multi-year infrastructure investment. “Nvidia is the singular architecture that enables all three transitions,” he said.

The report strengthens that narrative. Nvidia cites a project pipeline involving around 5 million GPUs for AI factories, claims visibility to half a trillion dollars in Blackwell and Rubin revenue through 2026, and is preparing its supply chain — from the first U.S.-made Blackwell wafers with TSMC to collaborations with Foxconn, Wistron, and Amkor — for years of excess demand.
And as Kress put it, “the clouds are sold out” down to the last token. The real question is no longer whether Nvidia can realize this vision — but how long it can stretch demand before supply finally catches up.

Nvidia’s Israeli Networking Division Doubles Sales

By Yochai Schweiger

Nvidia’s Networking Division, headquartered in Israel, continues to outpace the company’s overall growth with record-breaking momentum. In the second quarter, the division generated $7.3 billion in revenue—a 98% jump from last year and 46% growth compared to the previous quarter. The surge was driven by strong demand for Spectrum-X Ethernet, InfiniBand, and NVLink—critical interconnect technologies that link processors inside servers and connect data centers worldwide.

According to Nvidia’s latest quarterly report, Spectrum-X alone is now generating more than $10 billion in annualized revenue, with double-digit growth in Q2. InfiniBand also delivered standout results, with revenues nearly doubling in a single quarter thanks to adoption of the new XDR generation, which doubles bandwidth compared to its predecessor—a key enabler for running today’s massive AI models.

At the same time, demand for NVLink—used to connect GPUs and other components within servers—is rising rapidly, fueled by strong sales of Nvidia’s GB200 and GB300 systems. On the earnings call, executives stressed that “the growing demands of AI compute clusters require ultra-efficient, low-latency networking.” The message underscored the strategic importance of the Israeli division to Nvidia’s broader growth. Companywide, Nvidia’s total Q2 revenue climbed 56% year-over-year to a record $46.7 billion.

Red Tape Clouds China Sales

Despite receiving U.S. licenses in July to sell H20 chips to certain Chinese customers, Nvidia disclosed that no such sales have taken place. The holdup stems from regulatory limbo: Washington has signaled it expects 15% of revenues from these deals, but has yet to issue a formal rule, leaving Nvidia unable to act on the licenses it already holds.

Meanwhile, Nvidia is developing a Blackwell-based chip specifically for China. “Our products are designed and sold for beneficial commercial use, and any licensed sales will support the U.S. economy and its technology leadership,” the company said. Given the uncertainty, Nvidia excluded potential China H20 sales from its Q3 forecast. If the issue is resolved, it expects $2–5 billion in H20 sales next quarter.

Rubin Platform Enters Production

Nvidia also confirmed that its next-generation AI platform, Rubin, has entered production. For the first time, the platform integrates Nvidia’s own CPU—codenamed Vera—alongside Rubin GPUs, eliminating reliance on Intel or AMD processors. Together, Vera and Rubin will form the computational core of the system.

Rubin will also feature several Israeli-developed networking components: the CX9 SuperNIC, a next-gen NVLink switch, upgraded Spectrum-X switches, and a silicon-photonics processor enabling ultra-fast optical communication between servers and chips. Rubin is slated for mass production in 2026, keeping Nvidia on its one-year cadence of platform rollouts and signaling that innovation will extend well beyond GPUs to networking, compute, and software.

Sovereign AI Demand Surges

A fast-emerging market segment—dubbed “Sovereign AI”—is becoming a growth engine. These national initiatives aim to build independent AI infrastructure using local compute, data, and talent. Nvidia estimates revenue from this sector will exceed $20 billion in 2025, more than double 2024 levels.

The projects are vast in scale, with Nvidia often at the center. In Europe, the European Commission announced a €20 billion plan to build 20 “AI factories” across France, Germany, Italy, and Spain—including five giga-factories that will boost the continent’s compute capacity tenfold. In the UK, the new Isambard-AI supercomputer—powered by Nvidia—delivers 21 exaflops of performance to accelerate research in fields ranging from drug discovery to climate modeling.

Nvidia’s Networking Revenue Soars to $5 Billion

Nvidia reported record quarterly revenue for the seventh consecutive time last night. As the company’s data center operations continue to expand, so does demand for its networking products — developed in Nvidia’s Israeli R&D center and based on technology from Mellanox, acquired in 2020. In the first quarter, networking revenue reached $5 billion, a 63% increase from the previous quarter — significantly outpacing overall company revenue, which grew 12% to $44.1 billion.

Nvidia’s networking division consists of two main product lines: NVLink connectivity components, which link GPUs within the same chip package or rack, and Ethernet and InfiniBand-based interconnects that connect servers across the data center. Both product families are currently developed in Israel. Revenue from fifth-generation NVLink components — part of the new Blackwell platform — totaled $1 billion in Q1, reflecting surging demand for the Blackwell chip, which accounted for 70% of Nvidia’s data center sales last quarter.

Spectrum-X Ethernet switches, originally developed under Mellanox, are also seeing growing demand amid the global expansion of data center infrastructure. According to Nvidia, switch sales are projected to reach $8 billion this year. “Adoption among cloud and internet companies like Microsoft, CoreWeave, Oracle, and xAI is strong. In the past quarter, Google and Meta also joined the list of customers,” said Nvidia founder and CEO Jensen Huang during the earnings call.

Huang described an ongoing arms race among the tech giants. According to him, hyperscalers are now deploying approximately 1,000 new NVL72 server racks every week — each containing 72 Blackwell GPUs, amounting to around 72,000 new chips weekly. Microsoft alone is expected to deploy hundreds of thousands of Blackwell GPUs, primarily to power OpenAI workloads. Nvidia has already begun shipping samples of the Blackwell Ultra, which is expected to feature 50% more HBM memory and deliver significantly improved inference performance.
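The rack math is easy to verify; here is a back-of-the-envelope sketch using the figures quoted on the call:

```python
# Back-of-the-envelope check of the deployment figures quoted above.
racks_per_week = 1_000   # NVL72 racks reportedly deployed weekly
gpus_per_rack = 72       # Blackwell GPUs per NVL72 rack

gpus_per_week = racks_per_week * gpus_per_rack
print(f"{gpus_per_week:,} GPUs per week")                    # 72,000
print(f"{gpus_per_week * 52:,} GPUs per year at that pace")  # 3,744,000
```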

Nvidia also recently launched new Ethernet and InfiniBand switches based on silicon photonics technology, also developed in Israel, designed to enhance energy efficiency in data center operations.

Lattica Develops NVIDIA-Based Homomorphic Encryption

[Pictured above: Lattica’s team. Credit: Lattica]

Cryptography startup Lattica has emerged from stealth after completing a $3.25 million pre-seed funding round led by Cyber Fund, the venture firm of Konstantin Lomashuk. Operating under the radar for about a year and a half, Lattica was founded by CEO Dr. Rotem Tzabari, who holds a PhD in cryptography from the Weizmann Institute of Science.

The company has developed a cloud platform that enables AI models to run on encrypted data, without the need for prior decryption and without compromising the model’s inference process. This means that users and organizations can utilize AI tools—such as chatbots—while maintaining full privacy over personal and organizational data, an increasingly critical concern in sectors like healthcare, finance, government, and defense. Lattica is currently pursuing a broader funding round to accelerate its market activity. A free demo of its platform is available on the company’s website.

Performing Any Mathematical Operation on Encrypted Data

One of the greatest barriers to the continued growth of the cloud industry lies in data security weaknesses and privacy issues—a central concern in the public cloud space. These concerns have prevented sectors such as finance, insurance, healthcare, and government from large-scale cloud migration. The rise of AI applications, which are often offered via SaaS cloud services, has only intensified this dilemma.

Lattica addresses this challenge using homomorphic encryption, a method distinct from traditional encryption. While standard encryption completely obscures any correlation between the original and encrypted data, homomorphic encryption preserves the mathematical relationships between data elements even under encryption. This makes it possible to perform computations—such as running AI models—on encrypted data and obtain results that are mathematically accurate, without ever revealing the underlying information.
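The property is easiest to see in a toy example. The sketch below uses the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. It only illustrates the principle described here; it is not Lattica’s scheme, and practical FHE systems are lattice-based and far more capable.

```python
# Toy Paillier demo (additively homomorphic; insecure key sizes).
from math import gcd

p, q = 1789, 1861                  # tiny demo primes; real keys are huge
n, n2 = p * q, (p * q) ** 2
g = n + 1                          # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m, r):
    return (pow(g, m, n2) * pow(r, n, n2)) % n2   # c = g^m * r^n mod n^2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1 = encrypt(42, r=12345)
c2 = encrypt(58, r=67890)
c_sum = (c1 * c2) % n2     # operate on ciphertexts only...
print(decrypt(c_sum))      # ...and recover the sum of plaintexts: 100
```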

This form of encryption is particularly well-suited to cloud environments: the cloud can provide data-processing services such as analytics, AI, and machine learning without ever removing the encryption, and the resulting insights are decrypted only on secure, private servers.

For many years, fully homomorphic encryption (FHE), which enables any mathematical operation to be performed on encrypted data, was considered a theoretical challenge. In 2009, Craig Gentry of Stanford University presented the first such algorithm, known as Gentry’s scheme, which proposed a method of “encryption within encryption” (a technique now known as bootstrapping). While it was a groundbreaking theoretical advance, it was extremely slow and impractical for real-world use. Since then, more efficient algorithms have been developed, but not to the point of supporting the high-speed data processing typical of the digital world.

Most network encryption methods, such as RSA, were developed for and executed on central processing units (CPUs). FHE algorithms have also traditionally been CPU-based, but their computational complexity exceeds what standard CPUs can deliver in a timely manner. One attempted solution has been the development of dedicated processors for FHE computation.

Rewriting the Algorithms for GPUs

Lattica is taking a different path. Mathematically, these algorithms are actually better suited to execution on graphics processing units (GPUs), which excel at massively parallel computation. Lattica has rewritten the algorithms specifically to run in parallel on GPUs, particularly those manufactured by NVIDIA.

In an interview with Techtime, Dr. Tzabari explained: “NVIDIA’s processors brought about a computational revolution. Their compute capabilities drove the huge progress in machine learning and AI. Realizing that these capabilities could also solve the challenge of homomorphic encryption is what led me to found Lattica. NVIDIA offers a wide array of software tools that help maximize the hardware, and we use those tools to accelerate homomorphic encryption. To properly use these accelerators, you need to rewrite the algorithms to run in parallel. We built a solution tailored for NVIDIA processors—there is no longer a need for custom hardware.”
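As a loose illustration of why the workload fits GPUs: lattice-based FHE spends most of its time on huge batches of independent modular operations over polynomial coefficients — exactly the shape of computation GPUs handle best. The generic sketch below (CuPy on an NVIDIA GPU, with a NumPy fallback) shows that pattern; it is not Lattica’s implementation.

```python
# Generic pattern: coefficient-wise modular arithmetic over a large
# batch of polynomials, an embarrassingly parallel FHE building block.
import numpy as np

try:
    import cupy as xp    # runs on an NVIDIA GPU if CuPy is installed
except ImportError:
    xp = np              # CPU fallback keeps the sketch runnable

q = 2**31 - 1                        # toy coefficient modulus
batch, degree = 1024, 4096           # thousands of polynomials at once

a = xp.random.randint(0, q, size=(batch, degree), dtype=xp.int64)
b = xp.random.randint(0, q, size=(batch, degree), dtype=xp.int64)

# Every coefficient is independent, so one elementwise GPU kernel can
# process the entire batch in parallel.
c = (a * b) % q
print(c.shape)                       # (1024, 4096)
```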


OpTeamizer joins NVIDIA Metropolis to develop and optimize vision AI applications for customers worldwide

OpTeamizer, a service provider of AI development and GPU optimizations, announced that it has joined NVIDIA Metropolis, a partner program focused on bringing to market a new generation of vision AI applications. The program nurtures a rich ecosystem and offers powerful developer tools to supercharge vision AI applications that are designed to make the world’s most important spaces and operations safer and more efficient.

OpTeamizer provides end-to-end software development services for AI and HPC applications. With over 100 customers, ranging from large enterprises to start-ups, OpTeamizer services include R&D of AI using neural networks, CUDA GPU development, NVIDIA Jetson edge AI platform development, and software optimizations. The company leverages the power of NVIDIA hardware to provide its customers with innovative and advanced solutions. The services utilize cutting-edge NVIDIA technologies and frameworks such as CUDA, TensorRT, DeepStream, Triton, TAO, and more.

NVIDIA Metropolis makes it easier and more cost-effective for enterprises, governments, and integration partners to use world-class AI-enabled solutions to address critical operational-efficiency and safety problems. The NVIDIA Metropolis ecosystem contains a large and growing range of members who are investing in the most advanced AI techniques and the most efficient deployment platforms, and who take an enterprise-class approach to their solutions. Members can gain early access to NVIDIA platform updates to further enhance and accelerate their AI application development efforts. The program also offers members the opportunity to collaborate with industry-leading experts and other AI-driven organizations.

Tomer Gal, CTO and Founder of OpTeamizer, said: “Joining NVIDIA Metropolis is a significant milestone for OpTeamizer that aligns with our vision of developing cutting-edge technologies which help organizations meet the ever-growing demand for efficient and safe operations. By leveraging the powerful tools and resources provided by NVIDIA Metropolis, we can accelerate the development of next-generation vision AI applications and provide our worldwide customers with the best possible solutions.”

OpTeamizer provides software development services for artificial intelligence and high-performance computing applications. Its services cover the entire software development life cycle, from research and design to deployment. The company has a diverse customer base that includes both large enterprises and start-ups, and it offers a range of services, such as artificial intelligence, edge AI platform development, GPU development, and software optimization. By leveraging NVIDIA hardware, OpTeamizer aims to deliver cutting-edge solutions to meet the specific needs of its customers. With over 100 customers, OpTeamizer has established a strong track record of success, making it a trusted partner for businesses looking to leverage AI and HPC technologies.

OpTeamizer helps developers create embedded edge applications using the NVIDIA Jetson Platform

OpTeamizer, an Israel-based company that builds solutions and offers AI consulting to Israeli R&D centers, has been appointed an Embedded Edge Partner by multinational NVIDIA. With its appointment, OpTeamizer will provide consulting, development, and training services to NVIDIA customers worldwide, developing edge devices using the NVIDIA Jetson Platform.

NVIDIA’s Jetson Orin family is built around a powerful System on Module (SOM) that combines an Ampere-architecture GPU, an Arm Cortex CPU, LPDDR5 memory, highly efficient power management, high-speed interfaces, and more. The family offers distinct levels of performance, energy efficiency, and form factor to suit each market and industry.

Tomer Gal, Founder and CEO of OpTeamizer: “The world of artificial intelligence is rapidly shifting towards edge devices, which are becoming smarter and more efficient, while bringing innovation to a variety of industries. Our expertise in NVIDIA GPUs will assist R&D centers in bringing a new generation of edge devices based on the Jetson Orin platform to the market quickly. The platform will serve sectors such as smart cities, autonomous vehicles, drones, AI powered carts, medical devices, and other industries. The possibilities are endless, and we will work with any R&D center interested in pursuing them.”

In 2018, NVIDIA appointed OpTeamizer as its first education service delivery partner in the Middle East and Israel. The new appointment expands OpTeamizer’s activities within the embedded-systems market. The company is authorized to deliver training, conduct research and development of AI systems, and accelerate performance using NVIDIA GPUs and NVIDIA software libraries (SDKs). OpTeamizer also provides turnkey development services to businesses adopting GPU solutions as they enter the world of AI.

Tomer Gal and his team assist businesses in migrating from the CPU to the GPU world, accelerating the performance of new GPU systems, developing AI and neural network models, optimizing neural networks, and selecting the relevant hardware for deployment. Since 2018, OpTeamizer has provided a range of professional services and turnkey projects to more than 100 R&D centers. OpTeamizer cooperates with industry leaders in homeland security, healthcare, industrial inspection, and many other sectors.

OpTeamizer was founded in 2015 by Tomer Gal, one of Israel’s leading AI experts, who took part in strategic development projects at Intel Israel and General Electric Israel. Tomer holds an MSc in computer science and serves on an Israel Innovation Authority committee that evaluates AI startups’ grant applications. He is also a lecturer in artificial intelligence at the Software Engineering Department of ORT Braude College of Engineering.

OpTeamizer offers professional training for developers at R&D centers on NVIDIA’s development tools for the GPU environment, including courses such as CUDA C++, CUDA Python, CUDA for multiple GPUs, Deep Learning for computer vision, and Deep Learning for multiple data types.