NVIDIA’s Driving Model Poses a Challenge to Mobileye

By Yohai Schweiger

While NVIDIA’s Rubin platform for next-generation AI infrastructure captured most of the attention at CES 2026 in Las Vegas last week, the company quietly unveiled another move with potentially far-reaching strategic implications for the automotive industry: the launch of Alpamayo, an open foundation model for autonomous driving designed to serve as the planning and decision-making layer in future driving systems.

The announcement is expected to influence not only how autonomous driving systems are developed, but also the balance of power among technology suppliers in the automotive value chain — with particular implications for Israeli auto-tech companies.

Most Israeli players, including sensor makers Innoviz and Arbe, as well as simulation and validation specialists Cognata and Foretellix, do not provide full vehicle systems but rather core components within the broader stack. For them, NVIDIA’s move could prove supportive. By contrast, the availability of an open and flexible planning model that allows automakers to assemble software-hardware stacks around a unified computing platform poses a strategic challenge to Mobileye, which has built its market position around a vertically integrated, end-to-end solution and full system responsibility.

NVIDIA DRIVE: An AI-First Ecosystem for Automotive

Alpamayo now joins the broader set of solutions NVIDIA groups under its NVIDIA DRIVE platform — a comprehensive ecosystem for developing intelligent vehicle systems. DRIVE includes dedicated automotive processors such as Orin and Thor, an automotive operating system, sensor data processing and fusion tools, simulation platforms based on Omniverse and DRIVE Sim, and cloud infrastructure for training and managing AI models. In other words, it is a full-stack platform designed to support automakers from development and validation through real-time deployment on the vehicle itself.

This aligns with NVIDIA’s broader push toward an AI-first vehicle stack — shifting away from systems built primarily around hand-crafted rules and task-specific algorithms toward architectures where large AI models become central components, even in layers traditionally handled by “classical” algorithms, such as decision-making.

In this context, Alpamayo plays a strategic role. For the first time, NVIDIA is offering its own foundation model for planning and decision-making, effectively re-centering the DRIVE platform around an end-to-end AI-driven architecture — from cloud training to execution on the in-vehicle computer.

The Vehicle’s Tactical Brain

Alpamayo is a large multimodal Vision-Language-Action (VLA) model that ingests data from multiple video cameras, LiDAR and radar sensors, as well as vehicle state information, and converts it into an internal representation that enables reasoning and action planning. Based on this, the model generates a future driving trajectory several seconds ahead. It does not directly control actuators such as steering or braking, but it determines the vehicle’s tactical behavior.

Unlike general-purpose language models, Alpamayo operates in a physical environment and combines perception with spatial and contextual reasoning. Its inputs include video sequences, motion data, and in some cases maps and navigation goals. The model performs scene understanding, risk assessment, and path planning as part of a single decision chain. Its primary output is a continuous trajectory passed to the vehicle’s classical control layer, which handles physical actuation and safety constraints.
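
To make that division of labor concrete, the following is a minimal sketch of what such a planning layer's contract could look like. It is illustrative only: the type names, fields and the plan_trajectory stub are assumptions for exposition, not NVIDIA's published Alpamayo interface.

```python
# Illustrative sketch only: these types and the plan_trajectory signature are
# assumptions for exposition, not NVIDIA's actual Alpamayo interface.
from dataclasses import dataclass
from typing import Optional, Sequence

import numpy as np


@dataclass
class SensorSnapshot:
    """One synchronized tick of the inputs described above."""
    camera_frames: Sequence[np.ndarray]      # one HxWx3 image per camera
    lidar_points: np.ndarray                 # Nx4: x, y, z, intensity
    radar_tracks: np.ndarray                 # Mx5: x, y, vx, vy, rcs
    ego_state: np.ndarray                    # speed, yaw rate, steering angle, ...
    route_hint: Optional[np.ndarray] = None  # optional map / navigation goal


@dataclass
class Trajectory:
    """The model's primary output: a short-horizon path for the control layer."""
    timestamps_s: np.ndarray                 # e.g. 0.1 s steps over a ~5 s horizon
    waypoints_xy: np.ndarray                 # Kx2 positions in the ego frame
    target_speeds: np.ndarray                # K target speeds in m/s


def plan_trajectory(snapshot: SensorSnapshot) -> Trajectory:
    """Would wrap the VLA model: scene understanding, risk assessment and
    path planning collapse into a single forward pass. Stubbed here."""
    horizon = np.arange(0.0, 5.0, 0.1)
    return Trajectory(
        timestamps_s=horizon,
        waypoints_xy=np.zeros((len(horizon), 2)),            # placeholder path
        target_speeds=np.full(len(horizon), snapshot.ego_state[0]),
    )
```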

Training such a model relies on a combination of real-world data and massive amounts of synthetic data generated using NVIDIA’s simulation platforms, Omniverse and DRIVE Sim.

The model is released as open source, including weights and training code, allowing automakers and Tier-1 suppliers to retrain it on their own data, adapt it to their system architectures, and integrate it into existing stacks — not as a closed product, but as a foundation for internal development. NVIDIA has also announced partnerships with industry players including Lucid Motors, Jaguar Land Rover (JLR) and Uber, as well as research collaborations such as Berkeley DeepDrive, to explore advanced autonomous driving technologies using Alpamayo.

Mobileye: A Challenge to the Full-Stack Model

An autonomous driving stack typically consists of several layers: sensors, perception, planning and decision-making, and control. Alpamayo sits squarely in the planning layer. It does not replace perception, nor does it replace safety-critical control systems — but it does replace, or at least challenge, the traditional algorithmic decision-making layer.

This enables a more modular system design: perception from one supplier, planning from NVIDIA’s model, and control from another Tier-1. This represents a conceptual shift away from closed, end-to-end “black box” solutions.
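
As a loose illustration of that modularity, a stack could be wired so that each layer is supplied independently behind a narrow interface. The interfaces below are invented for this sketch and do not correspond to NVIDIA's, Mobileye's or any Tier-1's actual SDK.

```python
# Illustrative only: invented interfaces showing how layers from different
# suppliers could be composed in one driving stack.
from typing import Any, Protocol


class Perception(Protocol):
    def detect(self, raw_sensors: Any) -> Any: ...      # objects, lanes, free space


class Planning(Protocol):
    def plan(self, world_model: Any) -> Any: ...        # trajectory a few seconds ahead


class Control(Protocol):
    def actuate(self, trajectory: Any) -> None: ...     # steering/braking with safety limits


def drive_tick(perception: Perception, planning: Planning,
               control: Control, raw_sensors: Any) -> None:
    """One cycle of a modular stack: any layer can come from a different supplier."""
    world_model = perception.detect(raw_sensors)
    trajectory = planning.plan(world_model)
    control.actuate(trajectory)
```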

That is where the tension with Mobileye emerges. For years, Mobileye has offered a nearly complete stack — sensors, perception, mapping, planning, and proprietary EyeQ chips running the entire system with high energy efficiency. This model fits ADAS and L2+ systems well, and even extends to more advanced autonomous configurations.

However, foundation models for planning shift the balance. They require more flexible and powerful compute than dedicated ADAS chips typically provide, pushing architectures toward GPU-based computing.

While in some scenarios Mobileye perception components can be integrated into broader stacks, most of the company’s advanced autonomy solutions are offered as tightly integrated system units, which in practice limits the ability to swap out individual layers. Moreover, the very presence of an open planning model weakens the value proposition of proprietary planning software. Instead of developing or licensing dedicated planning algorithms, automakers can adapt an existing foundation model to their own data and operational requirements.

This is not an immediate threat to Mobileye’s core business, but over the longer term — as the market moves toward L3 and L4 autonomy and the decision layer becomes increasingly AI-driven — it represents a genuine strategic challenge to the closed, end-to-end model.

That said, Mobileye retains a significant structural advantage: it delivers a complete system and assumes full responsibility for safety and regulatory compliance. For many automakers, especially those without deep in-house AI and software capabilities, this is critical. They prefer a single supplier accountable for system performance rather than assembling and maintaining a complex “puzzle” of components from multiple vendors, with fragmented liability and higher regulatory risk.

Innoviz and Arbe: Sensors Gain Strategic Importance

For Israeli sensor suppliers such as Innoviz and Arbe, NVIDIA’s move could be distinctly positive. Advanced planning models benefit from rich, reliable, multi-sensor input. LiDAR provides precise three-dimensional geometry and depth, while advanced radar excels at detecting objects in poor lighting and adverse weather conditions.

This sensor data is essential for planning layers and decision-making models operating in dynamic physical environments. As a result, both companies are positioning themselves as part of NVIDIA’s ecosystem rather than alternatives to it. Both have demonstrated integration of their sensing and perception pipelines with NVIDIA’s DRIVE AGX Orin computing platform.

In a stack where decision-making becomes more computationally intensive and AI-driven, the value of high-quality sensing only increases. No matter how advanced the model, limited input inevitably leads to limited decisions.

Cognata and Foretellix: Who Verifies AI Safety?

Another layer gaining importance is simulation, verification and validation — where Israeli firms Cognata and Foretellix operate.

Cognata focuses on building synthetic worlds and complex driving scenarios for training and testing, while Foretellix provides verification and validation tools that measure scenario coverage, detect behavioral gaps, and generate quantitative safety metrics for regulators and safety engineers.

As AI models become central to driving stacks, the need for scenario-based safety validation grows, beyond simply accumulating road miles.

Both companies are aligned with NVIDIA’s simulation-centric development approach. Cognata integrates with DRIVE simulation and Hardware-in-the-Loop environments (where real vehicle hardware is connected to virtual scenarios) for large-scale testing, while Foretellix connects its validation tools to Omniverse and DRIVE to assess AI-based driving systems under diverse physical conditions.

Open Source, Semi-Closed Platform

Although Alpamayo is released as open source, it is deeply optimized for NVIDIA’s hardware platforms. Optimization for CUDA, TensorRT, and low-precision compute enables real-time execution on DRIVE computers, which are architecturally closer to GPUs than to traditional ADAS chips.
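
As a rough illustration of what that optimization path commonly looks like with NVIDIA's tooling, the sketch below compiles an ONNX export of a planning model into an FP16 TensorRT engine. This is generic TensorRT usage, not Alpamayo's actual build scripts, and the file names are placeholders.

```python
# Generic TensorRT build sketch (TensorRT 8.x Python API); file names are
# placeholders and this is not taken from NVIDIA's Alpamayo tooling.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Explicit-batch network definition, as required by the ONNX parser.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("planner.onnx", "rb") as f:          # placeholder ONNX export
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

# Allow reduced-precision (FP16) kernels, i.e. the "low-precision compute" above.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)

engine_bytes = builder.build_serialized_network(network, config)
with open("planner.plan", "wb") as f:          # serialized engine for deployment
    f.write(engine_bytes)
```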

This fits into NVIDIA’s broader open-model strategy: the company releases open models for robotics, climate science, healthcare and automotive — but after deep optimization for its own computing platforms. The approach enables broad ecosystem adoption while preserving a performance advantage for those building on NVIDIA hardware.

In practice, this allows NVIDIA to expand AI into physical industries while shaping the computing infrastructure those industries will rely on.

A Threat to One Model, an Opportunity for Others

NVIDIA’s driving model does not herald an immediate transformation on public roads, but it does signal a deeper shift in how the automotive industry approaches autonomy: fewer hand-crafted rules, more general AI models, more in-vehicle compute, and heavier reliance on simulation and validation.

For much of the Israeli auto-tech sector — sensor providers, simulation vendors and validation specialists — this trajectory aligns well with existing products and strategies, and could accelerate adoption and partnerships within the DRIVE ecosystem. For Mobileye, by contrast, it signals the emergence of an alternative path to building the “driving brain” — one that does not necessarily rely on a closed, vertically integrated stack.

If autonomous driving once appeared destined to be dominated by a small number of players controlling entire systems, NVIDIA’s move points toward a more modular future — with different layers supplied by different vendors around a central AI platform. At least in the Israeli auto-tech landscape, many players appear well positioned for that scenario.

Foretellix and Voxel51 Partner on Advanced 3D Reconstruction for Autonomous-Driving Training

Israeli company Foretellix and U.S.-based Voxel51 have announced a new partnership aimed at accelerating the training and verification of autonomous-vehicle (AV) systems. Together, they are introducing an end-to-end workflow that transforms raw driving logs into editable, AI-reconstructed 3D scenes that can be manipulated and deployed at scale in simulation environments. The collaboration leverages advanced neural-reconstruction techniques and visual-data processing to offer AV developers a powerful new tool for improving training, testing and validation workflows.

At the heart of the joint effort are drive logs—rich datasets collected from autonomous and semi-autonomous vehicles during real-world operation. These logs typically include video, LiDAR, radar, GPS, IMU measurements, vehicle-system status and annotated objects. For years, such logs have been indispensable for perception models, yet inherently limited: they capture only what actually occurred on the road. Foretellix and Voxel51 aim to convert these raw logs into full 3D reconstructions that can be expanded far beyond reality, enabling the creation of synthetic variations, edge cases and stress scenarios that cannot be consistently captured in physical testing.

In the joint workflow, Foretellix first classifies and analyzes the driving data, identifying coverage gaps relative to the vehicle's ODD (operational design domain). Voxel51 then processes the raw inputs—cleaning noise, running consistency checks, aligning data across sensors and interpreting context—to prepare the material for AI reconstruction. Their combined pipeline draws on 3D Gaussian Splatting (3DGS) and advanced rendering technologies to create realistic, editable 3D scenes. Foretellix then re-enters the loop, generating controlled variations of each scenario, modifying environmental elements, injecting external events and producing synthetic sensor data that mimics real-world output. The final stage is carried out in Voxel51's visualization and analytics platform, ensuring that the mixed real-and-synthetic dataset meets rigorous quality standards for model training.
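
Read as a pipeline, the hand-off might be sketched roughly as follows. Every function, type and path in this sketch is a hypothetical stand-in, since neither company publishes the workflow as a code-level API.

```python
# Schematic sketch of the described log-to-simulation workflow. All functions
# and paths are hypothetical stand-ins, not Foretellix or Voxel51 APIs.
from dataclasses import dataclass, field


@dataclass
class Scene:
    """Editable 3D reconstruction of one logged driving moment."""
    source_log: str
    weather: str = "clear"
    lighting: str = "day"
    extra_actors: list[str] = field(default_factory=list)


def analyze_odd_coverage(log_path: str) -> list[dict]:
    """Foretellix-style step (hypothetical): find scenario gaps vs. the ODD."""
    return [{"weather": "fog", "lighting": "dusk", "actors": ["cyclist"]}]


def curate_and_reconstruct(log_path: str) -> Scene:
    """Voxel51-style step (hypothetical): clean and align the sensors, then
    rebuild the scene with neural reconstruction such as 3DGS."""
    return Scene(source_log=log_path)


def render_synthetic_sensors(scene: Scene) -> dict:
    """Hypothetical renderer: produce camera/LiDAR/radar outputs for the scene."""
    return {"scene": scene, "frames": []}


def build_training_variants(log_path: str) -> list[dict]:
    gaps = analyze_odd_coverage(log_path)
    base = curate_and_reconstruct(log_path)
    variants = []
    for gap in gaps:
        edited = Scene(
            source_log=base.source_log,
            weather=gap.get("weather", base.weather),
            lighting=gap.get("lighting", base.lighting),
            extra_actors=gap.get("actors", []),
        )
        variants.append(render_synthetic_sensors(edited))
    return variants


if __name__ == "__main__":
    print(build_training_variants("drive_log_example.bag"))  # placeholder path
```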

Voxel51, a major player in computer vision, is headquartered in Michigan and specializes in large-scale visual-data management. Its flagship product, FiftyOne, allows teams to deeply inspect sensor datasets, uncover labeling errors, assess data quality and detect hidden patterns. Combined with Foretellix's expertise in AV scenario simulation, the partnership creates a seamless technological chain—from recorded reality to layered, simulation-ready virtual environments.
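
For a sense of what that inspection step looks like in practice, here is a minimal FiftyOne sketch; the directory path and the commented-out field names are placeholders rather than anything taken from the partnership itself.

```python
# Minimal sketch of the kind of dataset inspection FiftyOne supports;
# the path and field names are placeholders.
import fiftyone as fo
import fiftyone.brain as fob

# Load a directory of camera frames into a FiftyOne dataset.
dataset = fo.Dataset.from_dir(
    dataset_dir="/data/drive_log_frames",      # placeholder path
    dataset_type=fo.types.ImageDirectory,
    name="drive-log-review",
)

# Rank samples by visual uniqueness to surface rare frames worth labeling.
fob.compute_uniqueness(dataset)

# If model predictions and ground-truth labels are present, flag likely
# annotation errors ("mistakenness") for human review.
# fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")

# Open the interactive app to browse, filter and tag the flagged samples.
session = fo.launch_app(dataset)
session.wait()
```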

The implications are significant. Instead of relying solely on costly, months-long field testing that often misses rare events, AV developers can now synthesize edge cases on demand, recreate unusual incidents, tweak environmental parameters—lighting, weather, traffic—and slow down or accelerate events for analysis. For teams working on perception, planning and prediction algorithms, this represents a paradigm shift: hundreds of scenario variations can be generated from a single logged moment, exposing model weaknesses and enabling rapid iteration without sending another vehicle onto the road.

Beyond product development, the collaboration may influence how the AV industry approaches regulatory validation. Authorities increasingly require evidence of ODD coverage, robustness against edge cases and consistent behavior in complex scenarios. If real-world scenes can be faithfully reconstructed and expanded with controlled synthetic variations, validation could become more systematic, transparent and comprehensive.

Ultimately, the Foretellix–Voxel51 partnership reflects a sweeping industry trend: the shift from relying solely on raw real-world data to a blended model in which rich synthetic environments complement physical driving. Instead of learning only from what has happened, AV systems can now be tested against what could happen. For autonomous-driving developers, this promises higher safety, improved robustness and shorter development cycles—bringing the industry closer to vehicles that can reliably navigate the full complexity of the real world.