Mobileye Develops a “Social Intelligence” Layer for Autonomous Driving Systems

By Yohai Schwiger

During its earnings call yesterday, following the release of its financial results, Mobileye CEO Amnon Shashua revealed that the company is developing a new intelligence layer for its autonomous driving systems. The goal, he explained, is to add a level of understanding above the system’s immediate decision-making layer — the one responsible for analyzing the environment, identifying open lanes, and computing trajectories in real time. Shashua described the shift as a fundamental change in how the system understands the driving environment, stressing that “autonomous driving is not just a geometric perception problem, but a decision-making problem in a social environment.”

Shashua framed the road as a space populated by many independent agents, each making decisions that influence the behavior of others. Every driver, pedestrian, cyclist, or traffic actor is a participant in a single, interconnected system. A single action by an autonomous vehicle alters the behavior of the surrounding environment, and those reactions, in turn, feed back into the vehicle’s own decision-making. In such conditions, Shashua argued, understanding the road requires a broader perspective than simply identifying an open path or calculating an optimal trajectory.

Billions of Simulation Hours Overnight

Against this backdrop, Shashua introduced the concept of Artificial Community Intelligence, or ACI. According to him, the idea originates in academic research, but has never before been implemented at commercial scale in autonomous driving systems. “This is the first time an academic concept has been fully productized in an autonomous vehicle decision-making system,” he said. ACI is built on multi-agent reinforcement learning, aimed at understanding the dynamics of a community. The core idea is to train the system to understand how one decision affects the behavior of others, and how that chain of reactions ultimately feeds back into the vehicle itself.

This is a dimension familiar to every human driver, even if they are rarely conscious of it. A driver approaching a busy roundabout or a school pickup zone does not merely calculate available space or distance to the car ahead. Instead, they interpret the entire situation: children standing on the sidewalk, parents double-parked, impatient drivers nearby, and a general sense that the space is operating under different social rules. The decision to slow down, yield, or wait an extra moment is not derived from geometry alone, but from a deeper understanding of context and expectations.

To train such capabilities, Mobileye makes extensive use of simulation. Shashua explained that while visual perception can be trained using real-world data, planning and decision-making in a multi-agent environment require far larger volumes of data. “The sample complexity for planning is significantly higher,” he said, “because the actions you take influence the behavior of others.” The solution, according to Shashua, is large-scale simulation that allows the company to run massive numbers of scenarios. “We can reach a billion training hours overnight,” he said, highlighting the advantage of combining simulation with Mobileye’s global REM mapping infrastructure, which provides a realistic foundation for training.
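
Mobileye has not published the internals of its simulation stack, but the dependence Shashua describes, in which the data a policy learns from is shaped by the policy's own actions, can be illustrated with a minimal multi-agent sketch. Everything below is a hypothetical toy, not Mobileye's implementation: the merge scenario, the other driver's reaction model, and the reward values are assumptions chosen only to show why interactive planning needs far more samples than passive perception.

```python
# Toy illustration (not Mobileye's system): in a multi-agent setting, the data a
# policy learns from depends on its own actions, which drives up sample complexity.
import random

ACTIONS = ["yield", "assert"]            # hypothetical ego choices at a merge
value = {a: 0.0 for a in ACTIONS}        # running value estimate per action
counts = {a: 0 for a in ACTIONS}

def other_driver_reaction(ego_action: str) -> str:
    """The other agent's behavior is conditioned on what the ego just did."""
    if ego_action == "assert":
        # An assertive ego usually makes the other driver brake, but sometimes
        # they assert too; that interaction is what the policy must learn.
        return "brake" if random.random() < 0.8 else "assert"
    return "proceed"

def step(ego_action: str) -> float:
    """One simulated interaction; the outcome depends on both agents."""
    reaction = other_driver_reaction(ego_action)
    if ego_action == "assert" and reaction == "assert":
        return -10.0    # near-miss: the feedback loop Shashua describes
    if ego_action == "assert":
        return 1.0      # merged quickly
    return 0.5          # yielded: safe but slower

for episode in range(10_000):            # "overnight" scale, in miniature
    # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
    action = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=value.get)
    reward = step(action)
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the learned preference reflects how other agents respond to the ego
```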

Thinking Fast, Thinking Slow

The insights learned through simulation are divided by Mobileye into two cognitive layers: Fast Think and Slow Think. The Fast Think layer is responsible for real-time actions and manages the vehicle’s immediate safety. It operates at high frequencies — dozens of times per second — handling steering, braking, and lane keeping. This is a reflexive layer that runs directly on the vehicle’s hardware and cannot tolerate delays or uncertainty.

By contrast, the Slow Think layer focuses on understanding the situation in which the vehicle is operating. It does not ask only what is permitted or prohibited, but what is appropriate in a given context. Shashua described it as a layer designed to interpret the meaning of complex, non-routine scenes, rather than merely avoiding immediate hazards. For example, when a police officer blocks a lane and signals a vehicle to wait or change course, a safety system will know not to hit the officer. A contextual understanding layer, however, allows the vehicle to recognize that this is not a typical obstacle, but a human directive that requires a change in behavior.

The distinction between Fast Think and Slow Think closely echoes the fast-and-slow thinking model popularized by psychologist and Nobel laureate Daniel Kahneman, who used the terms to describe the difference between reflexive, automatic thinking and slower, deliberative reasoning. While Shashua did not mention Kahneman by name, the conceptual parallel is clear.

Architecturally, this translates into a clear separation between real-time systems and interpretive systems. Fast Think operates within tight control loops, while Slow Think runs at a lower cadence, providing context and high-level guidance. Its output is not a steering or braking command, but a shift in driving policy. It may cause the system to adopt more conservative behavior, avoid overtaking, or prefer yielding the right of way. The planner continues to compute trajectories in real time, but does so under a new set of priorities.
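
The division of labor Shashua describes can be sketched in a few lines. The sketch below is an assumption about how such an interface might look, not Mobileye's architecture: the DrivingPolicy fields, the cadences, and the slow_think/fast_think names are invented for illustration. The point it shows is that the slow layer only rewrites priorities, while the fast layer keeps computing commands every tick under whatever policy is currently in force.

```python
# Illustrative sketch only (assumed names and cadences, not Mobileye's design):
# a high-frequency control loop consumes low-frequency policy guidance.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DrivingPolicy:
    max_speed_mps: float = 30.0
    allow_overtake: bool = True

def slow_think(tick: int, policy: DrivingPolicy) -> DrivingPolicy:
    """Low-cadence context layer: interprets the scene and adjusts priorities only."""
    officer_directing_traffic = tick >= 100     # stand-in for real scene understanding
    if officer_directing_traffic:
        return replace(policy, max_speed_mps=5.0, allow_overtake=False)
    return policy

def fast_think(policy: DrivingPolicy) -> dict:
    """High-cadence reflexive layer: computes the next command under the current policy."""
    return {"target_speed": min(policy.max_speed_mps, 33.0),
            "overtake": policy.allow_overtake}

policy = DrivingPolicy()
for tick in range(150):                          # three simulated seconds at 50 Hz
    if tick % 50 == 0:                           # slow layer fires roughly once per second
        policy = slow_think(tick, policy)
    command = fast_think(policy)                 # the reflexive loop never waits on it

print(command)   # once the "officer" appears, the same planner drives far more cautiously
```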

Shashua noted that because the Slow Think layer is not safety-critical and does not operate in real time, it is not bound by the same hardware constraints as the execution layer. In principle, such processing could even be performed using heavier compute resources, including the cloud, to analyze complex situations more deeply. He emphasized, however, that this does not involve moving driving decisions to the cloud, but rather expanding the system’s ability to understand situations that do not require immediate response.

Will This Extend to Mentee Robotics’ Humanoids?

The connection between ACI and the Slow Think layer is most evident during training. Simulation does not operate while the vehicle is driving, but serves as a laboratory in which the system learns social dynamics. In real time, the vehicle does not recompute millions of scenarios, but instead recognizes situations it has encountered before and applies intuition acquired through prior training. In this sense, simulation is where understanding is built, and Slow Think is where it is expressed.

The concepts Shashua outlined are not necessarily limited to the automotive domain. They may also be relevant to Mobileye’s recent expansion into humanoid robotics through its acquisition of Mentee Robotics. Like autonomous vehicles, humanoid robots operate in multi-agent environments, where humans and other machines react to one another in real time. Beyond motor control and balance, such robots must understand context, intent, and social norms. Mobileye has noted that Mentee’s robot is designed to learn by observing behavior around it, rather than relying solely on preprogrammed actions — an approach that aligns naturally with ideas such as ACI and the separation between immediate reaction and higher-level understanding.

Why Emphasize This in an Earnings Call?

This raises an obvious question: why did Shashua choose to devote so much time during an investor call to a deep technical topic focused on internal AI architecture, rather than on revenues, forecasts, or new contracts? The choice appears deliberate. On one level, it draws a clear line between Mobileye and approaches that frame autonomous driving as a problem solvable through technological shortcuts or brute-force increases in compute. Shashua sought to emphasize that the core challenge lies in understanding human interaction, not merely in object detection or trajectory planning. At the same time, the discussion serves as expectation management, explaining why truly advanced autonomous systems take time to develop and do not immediately translate into revenue. On a deeper level, it signals a broader strategy: Mobileye is not just building products or chips, but an intelligence layer for physical AI — one that could extend beyond the current generation of autonomous vehicles into adjacent domains such as robotics.

In that sense, the most technical segment of the earnings call was also the clearest business message. Not a promise for the next quarter, but an explanation of why the longer, more complex path is, in Mobileye’s view, the right one.

Nexar Challenges Mobileye and Tesla with an AI Model for Accident Prediction

Israeli startup Nexar has unveiled a new artificial intelligence model called BADAS—short for Beyond ADAS—designed as a foundational layer for next-generation safety and autonomous driving systems. The model draws on tens of billions of real-world driving kilometers captured over years by Nexar’s dashcam network, in an effort to solve one of the toughest problems in mobility: how to make AI systems understand human behavior on the road, not just react to pre-programmed situations.

Unlike traditional models trained mainly on simulations or curated datasets, BADAS was trained on a massive trove of dashcam recordings gathered from private vehicles, commercial fleets, and municipal monitoring systems worldwide. The data come not from lab conditions but from the unpredictable chaos of everyday driving—changing weather, human errors, and near-miss events that never make it into formal crash reports. This gives the model an unprecedented ability to learn from authentic behavioral patterns and real-world context.

Real-World Data Meets Tesla-Style Scale

Nexar’s approach doesn’t replace simulations—it complements them. While simulations reproduce rare or dangerous scenarios, real-world footage provides the probabilistic texture of everyday driving. On this foundation, BADAS can anticipate what might happen seconds ahead—for instance, when a car subtly drifts toward another lane or when traffic dynamics around an intersection shift unexpectedly. The result is an evolutionary step from reactive alerts to probabilistic prediction based on learned behavior.
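
Nexar has not described BADAS's inference interface, so the following is a hypothetical sketch of the general pattern the article describes: turning a rolling window of recent dashcam frames into a collision probability a few seconds ahead, and raising an alert when that probability crosses a threshold. The window length, threshold, and the risk_model stand-in are all invented for illustration.

```python
# Hypothetical sketch, not Nexar's model or API: wrapping a learned video-based
# collision predictor so it produces alerts ahead of impact rather than reactions.
from collections import deque

WINDOW_FRAMES = 16          # assumed clip length fed to the model
ALERT_THRESHOLD = 0.7       # assumed operating point

def risk_model(frames: list) -> float:
    """Placeholder for a learned model; returns P(collision within a few seconds)."""
    return min(1.0, 0.05 * sum(frames) / len(frames))

def stream_alerts(frame_stream):
    """Maintain a sliding window of frames and yield alerts before the event."""
    window = deque(maxlen=WINDOW_FRAMES)
    for t, frame in enumerate(frame_stream):
        window.append(frame)
        if len(window) < WINDOW_FRAMES:
            continue
        p = risk_model(list(window))
        if p >= ALERT_THRESHOLD:
            yield t, p            # a downstream system decides whether to warn or brake

# Toy usage: each "frame" is a stand-in scalar hazard cue rather than real video.
hazard_cues = [0] * 100 + list(range(1, 40))
for t, p in stream_alerts(hazard_cues):
    print(f"frame {t}: collision risk {p:.2f}")
    break
```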

The strategy naturally recalls Tesla’s vision-based AI, which also relies on data from millions of vehicle cameras. Both companies see large-scale visual data as the key to autonomous learning, yet their roles differ sharply. Tesla builds a closed, vertically integrated system: data, software, and vehicle. Nexar, in contrast, is a data and AI infrastructure provider, not an automaker. It collects and processes global video data, then offers predictive models as a plug-in layer for others—automakers, fleet operators, insurers, and cities. Its ambition is to create a kind of “AI roadway infrastructure,” a shared computational foundation for safety and prediction.

Predicting a Crash 4.9 Seconds Ahead

Alongside the launch, Nexar published a research paper titled “BADAS: Context-Aware Collision Prediction Using Real-World Dashcam Data” on arXiv, detailing the model’s scientific principles. The paper redefines accident prediction as a task centered on the ego vehicle—the driver’s own car—rather than external incidents captured by chance.

Two model versions were introduced: BADAS-Open, trained on about 1,500 public videos, and BADAS 1.0, trained on roughly 40,000 proprietary clips from Nexar’s dataset. The study found that in many existing datasets, up to 90% of labeled “accidents” are irrelevant to the camera vehicle—necessitating new annotation work using Nexar’s richer data. The outcome was striking: the model predicted collisions an average of 4.9 seconds before they occurred, a major improvement over earlier visual prediction systems. Integrating real-world data with a new learning architecture known as V-JEPA2 further enhanced both accuracy and stability.

From Dashcams to a Global Data Engine

Until now, Nexar’s business revolved around consumer dashcams, but the company was quietly building a global road-data network—tens of millions of driving hours from hundreds of thousands of cars. BADAS marks the transformation of that dataset into a commercial product.

The model is expected to serve multiple industries: carmakers could license it for driver-assistance systems (ADAS); insurers might use it for context-based risk assessment; and municipalities could deploy it for real-time detection of hazards, crashes, or congestion. Nexar plans to make BADAS accessible through API and SDK interfaces, enabling partners to build custom services around it. Its greatest advantage lies in scale: every new vehicle connected to Nexar’s network enhances the model’s predictive power—a classic flywheel effect.
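
Nexar says BADAS will be exposed through API and SDK interfaces, but it has not published them, so the snippet below is purely hypothetical: the endpoint URL, request fields, and response shape are invented to show what a partner integration might look like in principle.

```python
# Purely hypothetical usage sketch: the endpoint, parameters, and response
# fields are placeholders, not a published Nexar API.
import json
import urllib.request

def score_clip(video_url: str, api_key: str) -> dict:
    """Submit a dashcam clip and return a collision-risk assessment (hypothetical)."""
    payload = json.dumps({"video_url": video_url, "horizon_seconds": 5}).encode()
    req = urllib.request.Request(
        "https://api.example.com/badas/v1/predict",      # placeholder URL
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A fleet operator might call such an endpoint per trip and route high-risk
# clips into a driver-coaching or insurance-scoring workflow.
```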

A Bridge Between Sensing and Intelligence

By launching BADAS, Nexar positions itself between hardware-driven vision firms like Mobileye and cloud-AI platforms that power advanced models. It aims to be the bridge connecting ground-truth data with the AI systems that interpret complex human motion and decision-making on the road. The move also reflects a broader automotive trend—shifting from costly sensors like LiDAR to scalable, cloud-based visual intelligence.

If Nexar’s model delivers on its promise—accurate real-time hazard prediction across diverse environments—it could become a core AI layer for global road safety, underpinning not only autonomous vehicles but also commercial fleets and smart-city systems worldwide.

Mobileye Begins Developing the EyeQ8 Chip for “Mind-Off” Driving

Mobileye Global Inc. (NASDAQ: MBLY) reported third-quarter 2025 revenue of $504 million, up about 4% year-over-year, with a non-GAAP net profit of $76 million and a GAAP net loss of $96 million.
The company raised its full-year outlook, guiding for $1.85–1.88 billion in revenue and up to $286 million in adjusted operating income — reflecting 2% and 11% increases over previous forecasts.

According to the company, the revision stems from stronger-than-expected performance in China and Europe. Demand for its EyeQ6-based ADAS systems continues to rise among Chinese automakers preparing for new model launches. In Europe, Mobileye announced a new collaboration with Bentley Motors, integrating advanced driver-assistance systems into upcoming luxury models. It marks one of Mobileye’s first deployments within a high-end European brand under the Volkswagen Group, serving as a template for additional VW marques.

China as a Challenge, India as the Next Frontier

During the earnings call, CEO Amnon Shashua highlighted that results were “better than expected in China, both from shipments to Chinese OEMs and from the performance of Western OEMs operating in the country.”
Still, the company faces pricing pressure in that market. CFO Moran Shemesh noted that “the average selling price of EyeQ chips declined by about $0.50 year-over-year, primarily due to higher Chinese OEM volumes, where pricing remains a significant headwind.”

At the same time, India is emerging as a major growth engine. “The growth potential in India is becoming increasingly clear — driven by stronger adoption trends and a supportive regulatory environment,” Shashua said. EVP Nimrod Nehushtan added that India will soon join the company’s REM network, which crowdsources road data from more than seven million vehicles worldwide.

Mobileye emphasized that it is now transitioning from advanced driver-assistance systems (ADAS) to fully autonomous capabilities, led by its EyeQ6 High and Surround ADAS platforms.
“Demand for higher performance at lower cost is intensifying,” said Shashua. “The EyeQ6 High delivers performance comparable — and in many cases superior — to Nvidia’s Orin X, at less than one-quarter of the price.”
The system combines multiple cameras and radar sensors to enable hands-free highway driving, and the company recently announced a second major Western OEM win for the technology — underscoring its growing appeal in mass-market vehicles.

EyeQ8 and the Era of “Mind-Off” Autonomy

Beyond near-term numbers, Shashua used the call to outline Mobileye’s longer-term technological vision. The company has begun development of its EyeQ7 and EyeQ8 chips — designed to push autonomy from “eyes-off” (where a human or teleoperator still serves as backup) to “mind-off,” where no human intervention is required.

“In mind-off driving, the driver can sleep — the robotaxi no longer needs a teleoperator,” he explained. “The EyeQ7 and EyeQ8 don’t replace the EyeQ6; they add a new layer on top of it. We need AI that can understand a scene like a human being — to perceive context, not just objects.”

According to Shashua, EyeQ chips follow a two-year development cadence. The upcoming EyeQ8, now in design, will be three to four times more powerful than the EyeQ7 and form the backbone of Mobileye’s mind-off systems targeted for 2029–2030.

Robotaxis and the German Testbed

Commercially, the company is preparing to remove safety drivers from its first U.S. robotaxi fleet in the first half of 2026, in partnership with Lyft, Volkswagen, and Holon (a Benteler division). In Europe, Mobileye is working with Volkswagen to secure homologation in Germany — a key regulatory milestone. “Germany’s government has made clear it wants to lead Europe in autonomous driving,” said Nehushtan, describing strong public and political support for the initiative.

Mobileye Achieves First Commercial Win for Its Radar in Autonomous Driving System

After seven years of development, Mobileye has secured the first commercial win for its imaging radar system: a major global automaker has selected the radar to serve as a core component in its autonomous driving platform. The decision follows over a year of comparative evaluations, in which the system competed head-to-head with rival technologies. The customer plans to integrate the radar into an SAE Level 3 autonomous driving system starting in 2028. This system will support hands-free driving on highways, with the ability to detect vehicles, objects, obstacles, and pedestrians.

Mobileye originally began developing the radar in 2018 with the goal of providing redundancy for its camera-based autonomous driving system. Most traditional automotive radars offer data about object distance, relative velocity, and horizontal positioning. However, Mobileye’s radar belongs to a new class of 4D imaging radars, which capture that same data in both horizontal and vertical planes, enabling a three-dimensional understanding of the environment over time.

The system is built around a radar-on-chip (SoC) processor developed entirely in-house by Mobileye, capable of delivering up to 11 TOPS of compute. It features a Massive MIMO-based transmit-and-receive architecture implemented using proprietary RFIC components. These components handle signal transmission and reception, convert the analog signals into digital form, and send the data to the radar’s main processor. The system supports more than 1,500 virtual channels and operates at a rate of 20 frames per second. The radar antenna provides a wide 170-degree field of view and sub-0.5-degree angular resolution.
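
A quick back-of-the-envelope check helps put the 1,500-virtual-channel figure in context: in a MIMO radar, the size of the virtual array is the product of the transmit and receive antenna counts. Mobileye has not disclosed its antenna configuration, so the 48 x 32 split below is only an illustrative assumption that lands in the right range.

```python
# Back-of-the-envelope only: virtual channels in a MIMO radar equal Tx * Rx.
# The antenna counts below are assumptions, not figures published by Mobileye.
n_tx, n_rx = 48, 32                      # illustrative split
virtual_channels = n_tx * n_rx           # 1,536 virtual channels
print(virtual_channels)
```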

From Backup Sensor to Central Sensing System

Mobileye’s radar is designed to serve three core functions in the vehicle: ensuring reliable sensing in poor environmental conditions that impair camera performance, enriching the scene understanding provided by cameras, and acting as a full fallback system in case of a camera failure. In effect, the radar is capable of replicating all camera-based functions to ensure uninterrupted autonomous driving.

According to the company, the radar can detect small objects at safe distances even when the vehicle is traveling at speeds of up to 130 km/h (about 81 mph). In such scenarios, the radar can identify pedestrians and cyclists at a range of around 315 meters, and even smaller hazardous obstacles at distances of approximately 250 meters.
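
As a rough sanity check on those figures, a vehicle traveling at 130 km/h covers about 36 meters per second, so the cited detection ranges leave a comfortable margin over a typical stopping distance. The reaction time and deceleration used below are common textbook assumptions, not values from Mobileye.

```python
# Sanity check with assumed, typical values for reaction time and braking
# deceleration (not Mobileye figures).
speed_kmh = 130
speed_mps = speed_kmh / 3.6              # about 36.1 m/s
reaction_time_s = 1.5                    # assumed driver/system latency
deceleration = 5.0                       # m/s^2, firm braking on a dry road

reaction_distance = speed_mps * reaction_time_s            # ~54 m
braking_distance = speed_mps ** 2 / (2 * deceleration)     # ~130 m
total_stop = reaction_distance + braking_distance          # ~184 m

print(round(total_stop, 1))   # well inside the ~250 m small-obstacle detection range
```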

Mobileye is currently traded on Nasdaq with a market capitalization of roughly $12 billion.

Mobileye Revenue Decreased 23% in Q4 2024

Mobileye announced that its fourth-quarter revenue decreased 23% year over year to $490 million, compared with the fourth quarter of 2023. The main reason for the weak quarter was a 20% reduction in EyeQ SoC volumes, primarily related to the previously disclosed build-up of inventory at its Tier 1 customers, including during the fourth quarter of 2023. Average System Price was $50.0 in the fourth quarter of 2024, compared with $52.7 in the prior-year period, primarily due to a lower percentage of SuperVision-related revenue.

Operating margin of (18%) decreased by 29 percentage points in the fourth quarter of 2024 compared with the prior-year period, due to higher operating expenses on a lower revenue base, as well as a lower gross margin. Mobileye's annual 2024 sales totaled $1.65 billion, compared with $2.08 billion in 2023. The company generated net cash of $400 million in 2024, and its balance sheet remains strong, with $1.4 billion in cash and cash equivalents and zero debt. Mobileye expects to return to growth this year, guiding for $1.69-1.81 billion in 2025 revenue.

Mobileye President and CEO Prof. Amnon Shashua said the company achieved major technological milestones: “EyeQ6 High System-on-Chip (SoC) is on-track for series production launch and achieves 10x the frame-per-second processing in comparison to EyeQ5 High. We look forward to a robust cadence of EyeQ6 High-based product launches beginning in 2026.”

He revealed that Mobileye imaging radar B-samples achieved outstanding performance across hundreds of OEM tests. “Most importantly, we progressed significantly on the SuperVision, Chauffeur, and Drive projects for VW Group, achieving milestones on the path to start-of-production.”

Mobileye to Use Innoviz LiDAR for Its AV Platform

Above: Mobileye’s robotaxi at its headquarters campus in Jerusalem

Mobileye and Innoviz Technologies announced that Mobileye will use Innoviz’s LiDARs for Mobileye Drive, its AV platform. Mobileye Drive is a comprehensive driverless system that enables fully autonomous robotaxis, ride-pooling, public transport, and goods delivery. It is now undergoing comprehensive testing in Europe, North America, and Asia. Innoviz’s LiDAR technology will join the cameras, radars, and imaging radars already in the platform. The agreement builds on joint work between the two companies over the past few months, with Start of Production (SOP) beginning in 2026.

“The integration of our imaging radars and high-resolution cameras in combination with the Innoviz LiDARs will play a key role in delivering Mobileye Drive,” said Prof. Amnon Shashua, President and CEO of Mobileye. Innoviz’s InnovizTwo product platform was specifically engineered for Mobileye Drive to provide the L4 autonomous platform with a complete set of LiDARs.

“Better-than-expected cost reduction”

The agreement was signed shortly after Mobileye ended the internal development of Frequency Modulated Continuous Wave (FMCW) LiDARs in September 2024. Mobileye explained the decision: “We now believe that the availability of next-generation FMCW lidar is less essential to our roadmap for eyes-off systems. This decision was based on a variety of factors, including substantial progress on our EyeQ6-based computer vision perception, increased clarity on the performance of our internally developed imaging radar, and continued better-than-expected cost reductions in third-party lidar units.”

The lidar R&D unit will be wound down by the end of 2024, affecting about 100 employees. Operating expenses for the unit are expected to total approximately $60 million in 2024 (including approximately $5 million in share-based compensation expenses). While this action is not expected to have a material impact on Mobileye’s results in 2024, it will eliminate lidar development spending going forward.

P3 to Use Mobileye Drive for Its Robotaxis

Mobileye and Croatia-based Project 3 Mobility (P3) announced a collaboration to explore a new mobility service built on Mobileye’s scalable self-driving technology, Mobileye Drive. The first P3 service is slated to launch in Zagreb in 2026, with testing and validation of Mobileye’s AV solution on the streets of the Croatian capital targeted to start in 2024.

Project 3 Mobility is developing a fully autonomous electric vehicle for an urban mobility ecosystem, along with the required specialized infrastructure and mobility services. The vehicle is built on a completely new platform designed for fully autonomous driving. The project will create a new mobility service in the wider Zagreb area based on the concept of “Mobility as a Service” (MaaS). Project 3 Mobility also plans to establish a production facility in Croatia for the large-scale production of autonomous vehicles that will be deployed worldwide.

Mobileye technology will be integrated into the P3 vehicle, which will use the Mobileye Drive autonomous driving solution. Project 3 Mobility currently has a team of more than 240 people, with experts from more than 20 industries and nationalities working out of two offices, in Croatia and the UK. P3 has already signed agreements with nine cities across the EU, the UK, and the Gulf Cooperation Council to provide its urban autonomous service.

Project 3 Mobility plans to invest at least €350 million before going to market. In May 2023 it secured a €179.5 million grant from the European Commission, and in early February 2024 it closed a €100 million Series A investment round. Current investors include Kia, SiteGround, and Rimac Group.

Jerusalem-based Mobileye develops autonomous driving and driver-assistance technologies. Today, more than 170 million vehicles worldwide have been built with Mobileye technology inside. Its 2023 annual revenue totaled $2.08 billion, representing 11.24% growth year-over-year. The company’s future business backlog continues to grow, with 2023 design wins projected to generate future revenue of $7.4 billion across 61 million units.