Nexar Challenges Mobileye and Tesla with an AI Model for Accident Prediction

Israeli startup Nexar has unveiled a new artificial intelligence model called BADAS—short for Beyond ADAS—designed as a foundational layer for next-generation safety and autonomous driving systems. The model draws on tens of billions of real-world driving kilometers captured over years by Nexar’s dashcam network, in an effort to solve one of the toughest problems in mobility: how to make AI systems understand human behavior on the road, not just react to pre-programmed situations.

Unlike traditional models trained mainly on simulations or curated datasets, BADAS was trained on a massive trove of dashcam recordings gathered from private vehicles, commercial fleets, and municipal monitoring systems worldwide. The data come not from lab conditions but from the unpredictable chaos of everyday driving—changing weather, human errors, and near-miss events that never make it into formal crash reports. This gives the model an unprecedented ability to learn from authentic behavioral patterns and real-world context.

Real-World Data Meets Tesla-Style Scale

Nexar’s approach doesn’t replace simulations—it complements them. While simulations reproduce rare or dangerous scenarios, real-world footage provides the probabilistic texture of everyday driving. On this foundation, BADAS can anticipate what might happen seconds ahead—for instance, when a car subtly drifts toward another lane or when traffic dynamics around an intersection shift unexpectedly. The result is an evolutionary step from reactive alerts to probabilistic prediction based on learned behavior.

The strategy naturally recalls Tesla’s vision-based AI, which also relies on data from millions of vehicle cameras. Both companies see large-scale visual data as the key to autonomous learning, yet their roles differ sharply. Tesla builds a closed, vertically integrated system: data, software, and vehicle. Nexar, in contrast, is a data and AI infrastructure provider, not an automaker. It collects and processes global video data, then offers predictive models as a plug-in layer for others—automakers, fleet operators, insurers, and cities. Its ambition is to create a kind of “AI roadway infrastructure,” a shared computational foundation for safety and prediction.

Predicting a Crash 4.9 Seconds Ahead

Alongside the launch, Nexar published a research paper titled “BADAS: Context-Aware Collision Prediction Using Real-World Dashcam Data” on arXiv, detailing the model’s scientific principles. The paper redefines accident prediction as a task centered on the ego vehicle—the driver’s own car—rather than external incidents captured by chance.

Two model versions were introduced: BADAS-Open, trained on about 1,500 public videos, and BADAS 1.0, trained on roughly 40,000 proprietary clips from Nexar’s dataset. The study found that in many existing datasets, up to 90% of labeled “accidents” are irrelevant to the camera vehicle—necessitating new annotation work using Nexar’s richer data. The outcome was striking: the model predicted collisions an average of 4.9 seconds before they occurred, a major improvement over earlier visual prediction systems. Integrating real-world data with a new learning architecture known as V-JEPA2 further enhanced both accuracy and stability.
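The headline figure is easy to picture: for each evaluation clip, subtract the time the model raises its alert from the time the collision actually occurs, then average across clips. The sketch below uses invented timestamps (not Nexar's data) purely to illustrate the arithmetic behind an average-lead-time metric.

```python
# Illustrative only: invented per-clip timestamps, not data from the
# BADAS paper. Each pair is (alert_time_s, collision_time_s) within a clip.
clips = [
    (3.1, 8.2),
    (5.0, 9.7),
    (2.4, 7.3),
]

# Lead time = how far in advance of the collision the alert fired.
lead_times = [collision - alert for alert, collision in clips]
avg_lead = sum(lead_times) / len(lead_times)
print(f"average lead time: {avg_lead:.1f} s")  # prints 4.9 s for these toy numbers
```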

From Dashcams to a Global Data Engine

Until now, Nexar’s business revolved around consumer dashcams, but the company was quietly building a global road-data network—tens of millions of driving hours from hundreds of thousands of cars. BADAS marks the transformation of that dataset into a commercial product.

The model is expected to serve multiple industries: carmakers could license it for driver-assistance systems (ADAS); insurers might use it for context-based risk assessment; and municipalities could deploy it for real-time detection of hazards, crashes, or congestion. Nexar plans to make BADAS accessible through API and SDK interfaces, enabling partners to build custom services around it. Its greatest advantage lies in scale: every new vehicle connected to Nexar’s network enhances the model’s predictive power—a classic flywheel effect.
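Nexar has not published the API's schema, so the following is a hypothetical sketch of what a partner-side integration might look like: parsing a prediction payload and deciding whether to warn the driver. The field names (`risk_score`, `seconds_ahead`) and the threshold are invented for illustration.

```python
# Hypothetical sketch: Nexar has announced API/SDK access, but the field
# names, payload format, and threshold below are all invented.
import json
from dataclasses import dataclass

@dataclass
class HazardPrediction:
    risk_score: float      # 0.0 (safe) .. 1.0 (imminent collision)
    seconds_ahead: float   # predicted time horizon of the hazard

def parse_prediction(payload: str) -> HazardPrediction:
    """Parse a (hypothetical) JSON prediction response into a typed record."""
    data = json.loads(payload)
    return HazardPrediction(
        risk_score=float(data["risk_score"]),
        seconds_ahead=float(data["seconds_ahead"]),
    )

# Example payload a partner service might receive:
example = '{"risk_score": 0.87, "seconds_ahead": 4.9}'
pred = parse_prediction(example)
if pred.risk_score > 0.8:  # invented alerting threshold
    print(f"warn driver: hazard likely in {pred.seconds_ahead:.1f} s")
```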

A Bridge Between Sensing and Intelligence

By launching BADAS, Nexar positions itself between hardware-driven vision firms like Mobileye and cloud-AI platforms that power advanced models. It aims to be the bridge connecting ground-truth data with the AI systems that interpret complex human motion and decision-making on the road. The move also reflects a broader automotive trend—shifting from costly sensors like LiDAR to scalable, cloud-based visual intelligence.

If Nexar’s model delivers on its promise—accurate real-time hazard prediction across diverse environments—it could become a core AI layer for global road safety, underpinning not only autonomous vehicles but also commercial fleets and smart-city systems worldwide.

Nexar Dash Cams help in training autonomous cars

Tel Aviv-based Nexar reported a 30% increase in the usage of its smart car cameras (dash cams). As of today, 210 million kilometers are filmed and recorded monthly, and a total of 3.2 trillion road pictures have been gathered. The company also estimates that its camera usage will double in the next 12 months.
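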

Nexar was founded in 2015 by CEO Eran Shir and CTO Bruno Fernandez-Ruiz, and currently employs about 100 people – about 80 in Israel and the others in New York.

The company developed GPS-based dash cams that connect to a smartphone through a dedicated app, which records footage and provides insights about road users. The app leverages the smartphone's sensors and processing power to deliver safety alerts to the driver, such as warnings of lane departure or an impending collision.

A main feature of the app is the transmission of information to the cloud, which in turn, through a V2V (vehicle-to-vehicle) network, provides alerts to nearby drivers, warning them of dangers beyond their field of vision that they cannot notice directly.
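The core of such a cloud relay is a proximity filter: a hazard reported at one location is forwarded only to drivers close enough for it to matter. The sketch below illustrates that idea with a haversine distance check; the coordinates, radius, and data shapes are invented and do not reflect Nexar's actual protocol.

```python
# Illustrative sketch of a cloud-relay hazard alert: forward a reported
# hazard to drivers within a radius. All values are invented; this is
# not Nexar's actual protocol.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def drivers_to_alert(hazard, drivers, radius_km=1.0):
    """Return ids of drivers close enough to the hazard to be warned."""
    return [
        d["id"] for d in drivers
        if haversine_km(hazard["lat"], hazard["lon"], d["lat"], d["lon"]) <= radius_km
    ]

hazard = {"lat": 32.0853, "lon": 34.7818}         # e.g. a stopped car
drivers = [
    {"id": "a", "lat": 32.0860, "lon": 34.7820},  # ~80 m away: alerted
    {"id": "b", "lat": 32.2000, "lon": 34.9000},  # ~17 km away: ignored
]
print(drivers_to_alert(hazard, drivers))  # prints ['a']
```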

At the same time, the rapid development of smart mobility and autonomous driving is expanding the target markets for the data collected by the cameras. "The data collected enables automakers to train the cars they manufacture and keep their maps up-to-date in real time to accommodate road changes," said CEO Shir. The company currently supplies companies developing autonomous driving systems with edge-case driving scenarios, used to train those systems toward Level 4 and Level 5 autonomy requirements — an area that is crucial but starved of data. All this led to a product the company launched six months ago: a service that provides drivers and insurance companies with a detailed reconstruction of the actions of all parties involved in a collision, built from raw camera data, on-board sensors, and deep-learning algorithms.

The service collects data from the dash cam and several sensors, integrates it with accurate road maps, and reconstructs the collision step by step, including vehicle behavior and the moments of impact. Nexar has begun providing this service to hundreds of thousands of customers of the Japanese insurance company Mitsui Sumitomo Insurance.

Another new Nexar service is highly detailed road maps for autonomous-car manufacturers. Today, most dynamic road maps are produced with LiDAR technology, which supports a relatively low level of detail. The company recently announced that its dash cams provide up-to-date details that are easy to incorporate into existing road maps.

These up-to-date details can also be used for road maintenance. Recently, Nexar and U.S.-based Blyncsy announced a partnership to collect data on 6.5 million kilometers of public roads. The data will be used to locate road damage and pass the information to the authorities. National studies show that effective roadway marking saves lives: over 50 percent of fatalities on America's roadways — roughly 19,000 deaths annually — occur when motorists leave their travel lanes, according to federal data. Most pavement markings are repainted only once a year, but with harsh weather and degrading asphalt, they often need to be repainted or maintained more frequently.