Cognata’s Surprising Military Box

By Yohai Schweiger

Cognata, a Rehovot-based company best known for developing simulators for training and testing autonomous driving systems, has unveiled a surprising new product that marks a strategic pivot into the defense market—an arena entirely new for the company. Until recently, Cognata focused almost exclusively on commercial automotive customers. But at CES this month in Las Vegas, it introduced AVBox: a compact hardware-software kit that enables standard military vehicles, both light and heavy, to operate autonomously in off-road environments.

AVBox is a self-contained unit mounted on the vehicle’s roof, integrating a sensor suite, onboard computing, and vision- and navigation-focused algorithms. It is designed specifically for off-road scenarios—unstructured terrain without lanes, traffic signals, or civilian infrastructure—and is built around predefined, mission-oriented autonomy. Rather than full autonomy, the system supports limited operational autonomy: independently navigating a vehicle over several kilometers, primarily for logistical and operational tasks, even in the absence of GPS, digital maps, or continuous communication with a remote operator.

Bridging Remote Control and Autonomy

This is not a prototype but a finished commercial product. AVBox is already undergoing advanced evaluation by a potential customer, following pilot trials in which it was installed on operational armored vehicles to convert them for remote or unmanned operation—particularly relevant for dangerous or logistics-heavy missions. Cognata’s solution effectively bridges the gap between remote control and autonomy. While many military vehicles today rely on remote operation, loss of communication—due to terrain, interference, or distance—often renders them immobile.

In such cases, AVBox can assume control and autonomously complete the mission based on predefined objectives, ensuring operational continuity without requiring full, always-on autonomy. Similar concepts are already being applied in Ukraine’s drone warfare. Designed for easy integration into existing platforms, AVBox enables rapid upgrades of current fleets without lengthy and costly development or procurement cycles. Cognata is actively marketing the system in Europe and the United States, which is why it chose to debut AVBox at CES.

AVBox is offered in three sensing configurations. The base “Scout” version relies on daytime cameras. A more advanced variant adds thermal cameras for night operations, while the most sophisticated configuration incorporates LiDAR to handle complex terrain. The system’s modular design allows customers to tailor it to mission requirements, terrain conditions, and budget constraints, with an emphasis on broad deployment rather than limited, high-cost use.

A Military Product Born from a Civilian Demo

The transition to an operational autonomous system emerged from two parallel developments in recent years. On one hand, Cognata’s simulator expanded into off-road driving domains, including robotic training and open-terrain simulation. Initially, customers for these capabilities came primarily from civilian sectors such as agriculture and mining, but over the past two years, the company has also begun supplying them to the Israeli military. This exposure highlighted real-world defense needs and the gap between available technology and operational requirements.

Shay Rotman, Cognata’s Vice President of Business Development and Marketing, told Techtime that some automotive OEMs were skeptical about the reliability of simulation and synthetic data for training computer-vision systems. “To prove that the technology works,” Rotman said, “we developed a computer-vision stack in 2023 that included sensors, processors, and algorithms. Initially, it was meant as a technological demo to show potential customers how advanced vision systems could be built using purely synthetic data.”

As part of that effort, Cognata developed a monocular depth technology capable of estimating distances and 3D structure from a single image. The system was trained on hundreds of thousands of images generated entirely in simulation. “It started out as a response to our own pain point and a way to prove the value of our simulation,” Rotman said. “At some point, we realized we had the foundation for a completely new type of system.”

According to Rotman, the decision to focus on limited, mission-oriented autonomy rather than full autonomy was also shaped by lessons learned from the automotive sector. “Civilian automotive aims for full autonomy, but that’s still far off and requires massive resources. In the military—especially in off-road driving—you can solve real problems today with task-focused autonomy.” That philosophy also influenced pricing. “In many militaries, there’s a gap between development and procurement because advanced systems are so expensive,” he said. “We wanted to offer something affordable enough for wide deployment—not a system that remains stuck in the experimental phase.”

A Growing Market Offsetting a Stalled One

AVBox emerges against the backdrop of a challenging reality in Cognata’s original market. Global investment in automotive technology has declined steadily in recent years, while major carmakers—particularly in Europe and the United States—face mounting business pressures. “Auto-tech today is a tough market,” Rotman said. “Many manufacturers are dealing with competitive and financial challenges and are investing less in autonomous driving development.”

As a result, Cognata significantly reduced its workforce over the past two years, adapting to delayed projects, slower investment cycles, and the prolonged path to commercialization in civilian automotive. “It’s a heavily regulated market with long development cycles and very high costs,” Rotman noted. “Not every company can keep moving at the same pace as before.”

With AVBox, Cognata is now positioning itself as a company operating in two parallel domains: simulation, where it continues to support the development and training of autonomous systems, and operational autonomy tailored to clearly defined defense needs. “We’ve moved from being a pure simulation provider to a company with two products,” Rotman concluded. “And in defense, there is openness, demand, and a genuine opportunity today.”

NVIDIA’s Driving Model Poses a Challenge to Mobileye

By Yohai Schweiger

While NVIDIA’s Rubin platform for next-generation AI infrastructure captured most of the attention at CES 2026 in Las Vegas last week, the company quietly unveiled another move with potentially far-reaching strategic implications for the automotive industry: the launch of Alpamayo, an open foundation model for autonomous driving designed to serve as the planning and decision-making layer in future driving systems.

The announcement is expected to influence not only how autonomous driving systems are developed, but also the balance of power among technology suppliers in the automotive value chain — with particular implications for Israeli auto-tech companies.

Most Israeli players, including sensor makers Innoviz and Arbe, as well as simulation and validation specialists Cognata and Foretellix, do not provide full vehicle systems but rather core components within the broader stack. For them, NVIDIA’s move could prove supportive. By contrast, the availability of an open and flexible planning model that allows automakers to assemble software-hardware stacks around a unified computing platform poses a strategic challenge to Mobileye, which has built its market position around a vertically integrated, end-to-end solution and full system responsibility.

NVIDIA DRIVE: An AI-First Ecosystem for Automotive

Alpamayo now joins the broader set of solutions NVIDIA groups under its NVIDIA DRIVE platform — a comprehensive ecosystem for developing intelligent vehicle systems. DRIVE includes dedicated automotive processors such as Orin and Thor, an automotive operating system, sensor data processing and fusion tools, simulation platforms based on Omniverse and DRIVE Sim, and cloud infrastructure for training and managing AI models. In other words, it is a full-stack platform designed to support automakers from development and validation through real-time deployment on the vehicle itself.

This aligns with NVIDIA’s broader push toward an AI-first vehicle stack — shifting away from systems built primarily around hand-crafted rules and task-specific algorithms toward architectures where large AI models become central components, even in layers traditionally handled by “classical” algorithms, such as decision-making.

In this context, Alpamayo plays a strategic role. For the first time, NVIDIA is offering its own foundation model for planning and decision-making, effectively re-centering the DRIVE platform around an end-to-end AI-driven architecture — from cloud training to execution on the in-vehicle computer.

The Vehicle’s Tactical Brain

Alpamayo is a large multimodal Vision-Language-Action (VLA) model that ingests data from multiple video cameras, LiDAR and radar sensors, as well as vehicle state information, and converts it into an internal representation that enables reasoning and action planning. Based on this, the model generates a future driving trajectory several seconds ahead. It does not directly control actuators such as steering or braking, but it determines the vehicle’s tactical behavior.

Unlike general-purpose language models, Alpamayo operates in a physical environment and combines perception with spatial and contextual reasoning. Its inputs include video sequences, motion data, and in some cases maps and navigation goals. The model performs scene understanding, risk assessment, and path planning as part of a single decision chain. Its primary output is a continuous trajectory passed to the vehicle’s classical control layer, which handles physical actuation and safety constraints.

Training such a model relies on a combination of real-world data and massive amounts of synthetic data generated using NVIDIA’s simulation platforms, Omniverse and DRIVE Sim.

The model is released as open source, including weights and training code, allowing automakers and Tier-1 suppliers to retrain it on their own data, adapt it to their system architectures, and integrate it into existing stacks — not as a closed product, but as a foundation for internal development. NVIDIA has also announced partnerships with industry players including Lucid Motors, Jaguar Land Rover (JLR), Uber, and research collaborations such as Berkeley DeepDrive to explore advanced autonomous driving technologies using Alpamayo.

Mobileye: A Challenge to the Full-Stack Model

An autonomous driving stack typically consists of several layers: sensors, perception, planning and decision-making, and control. Alpamayo sits squarely in the planning layer. It does not replace perception, nor does it replace safety-critical control systems — but it does replace, or at least challenge, the traditional algorithmic decision-making layer.

This enables a more modular system design: perception from one supplier, planning from NVIDIA’s model, and control from another Tier-1. This represents a conceptual shift away from closed, end-to-end “black box” solutions.

That is where the tension with Mobileye emerges. For years, Mobileye has offered a nearly complete stack — sensors, perception, mapping, planning, and proprietary EyeQ chips running the entire system with high energy efficiency. This model fits well with ADAS and L2+ systems, and even more advanced autonomous configurations.

However, foundation models for planning shift the balance. They require more flexible and powerful compute than dedicated ADAS chips typically provide, pushing architectures toward GPU-based computing.

While in some scenarios Mobileye perception components can be integrated into broader stacks, most of the company’s advanced autonomy solutions are offered as tightly integrated system units, which in practice limits the ability to swap out individual layers. Moreover, the very presence of an open planning model weakens the value proposition of proprietary planning software. Instead of developing or licensing dedicated planning algorithms, automakers can adapt an existing foundation model to their own data and operational requirements.

This is not an immediate threat to Mobileye’s core business, but over the longer term — as the market moves toward L3 and L4 autonomy and the decision layer becomes increasingly AI-driven — it represents a genuine strategic challenge to the closed, end-to-end model.

That said, Mobileye retains a significant structural advantage: it delivers a complete system and assumes full responsibility for safety and regulatory compliance. For many automakers, especially those without deep in-house AI and software capabilities, this is critical. They prefer a single supplier accountable for system performance rather than assembling and maintaining a complex “puzzle” of components from multiple vendors, with fragmented liability and higher regulatory risk.

Innoviz and Arbe: Sensors Gain Strategic Importance

For Israeli sensor suppliers such as Innoviz and Arbe, NVIDIA’s move could be distinctly positive. Advanced planning models benefit from rich, reliable, multi-sensor input. LiDAR provides precise three-dimensional geometry and depth, while advanced radar excels at detecting objects in poor lighting and adverse weather conditions.

This sensor data is essential for planning layers and decision-making models operating in dynamic physical environments. As a result, both companies are positioning themselves as part of NVIDIA’s ecosystem rather than alternatives to it. Both have demonstrated integration of their sensing and perception pipelines with NVIDIA’s DRIVE AGX Orin computing platform.

In a stack where decision-making becomes more computationally intensive and AI-driven, the value of high-quality sensing only increases. No matter how advanced the model, limited input inevitably leads to limited decisions.

Cognata and Foretellix: Who Verifies AI Safety?

Another layer gaining importance is simulation, verification and validation — where Israeli firms Cognata and Foretellix operate.

Cognata focuses on building synthetic worlds and complex driving scenarios for training and testing, while Foretellix provides verification and validation tools that measure scenario coverage, detect behavioral gaps, and generate quantitative safety metrics for regulators and safety engineers.

As AI models become central to driving stacks, the need for scenario-based safety validation grows, beyond simply accumulating road miles.

Both companies are aligned with NVIDIA’s simulation-centric development approach. Cognata integrates with DRIVE simulation and Hardware-in-the-Loop environments (where real vehicle hardware is connected to virtual scenarios) for large-scale testing, while Foretellix connects its validation tools to Omniverse and DRIVE to assess AI-based driving systems under diverse physical conditions.

Open Source, Semi-Closed Platform

Although Alpamayo is released as open source, it is deeply optimized for NVIDIA’s hardware platforms. Optimization for CUDA, TensorRT, and low-precision compute enables real-time execution on DRIVE computers, which are architecturally closer to GPUs than to traditional ADAS chips.

This fits into NVIDIA’s broader open-model strategy: the company releases open models for robotics, climate science, healthcare and automotive — but after deep optimization for its own computing platforms. The approach enables broad ecosystem adoption while preserving a performance advantage for those building on NVIDIA hardware.

In practice, this allows NVIDIA to expand AI into physical industries while shaping the computing infrastructure those industries will rely on.

A Threat to One Model, an Opportunity for Others

NVIDIA’s driving model does not herald an immediate transformation on public roads, but it does signal a deeper shift in how the automotive industry approaches autonomy: fewer hand-crafted rules, more general AI models, more in-vehicle compute, and heavier reliance on simulation and validation.

For much of the Israeli auto-tech sector — sensor providers, simulation vendors and validation specialists — this trajectory aligns well with existing products and strategies, and could accelerate adoption and partnerships within the DRIVE ecosystem. For Mobileye, by contrast, it signals the emergence of an alternative path to building the “driving brain” — one that does not necessarily rely on a closed, vertically integrated stack.

If autonomous driving once appeared destined to be dominated by a small number of players controlling entire systems, NVIDIA’s move points toward a more modular future — with different layers supplied by different vendors around a central AI platform. At least in the Israeli auto-tech landscape, many players appear well positioned for that scenario.

Exclusive: Cognata’s new strategy

Cognata, based in Rehovot, Israel, announced a few weeks ago the appointment of Dr. Gahl Berkooz as Chief Data Officer and President, Americas. This is a new role at the company, and filling it with a senior, experienced executive reveals Cognata’s new strategy: moving from supplying a simulator toward a complete procedural solution for both the development and validation of ADAS and autonomous driving systems. The goal of this platform is to cover all phases, from creating and managing data through executing the verification processes required throughout the product lifecycle.

Today, the smart vehicle is conceptualized as a computer on wheels: an instrument that produces enormous amounts of data that must be managed and utilized for analytics and monetization. However, when Berkooz joined Ford in 2004, the connected-vehicle field was in its infancy, and the interface between cars, data, and computers was far less natural. In a conversation with Techtime, Berkooz, who holds a Ph.D. in Applied Mathematics from Cornell University, said: “The data area among major vehicle manufacturers was a mess. As the connected-car field evolved, it was clear that we needed a new approach to data management and the way we utilize it.”

During his time at Ford, Berkooz was in charge of establishing Information Management and Analytics at the OEM, covering both organizational data and data produced and consumed by drivers. He formulated how data is collected and standardized to produce analytics and monetization. Later, he moved to General Motors as Chief of Analytics for its Global Connected Customer Experience (OnStar) Division, where he led similar processes between 2016 and 2018.

Berkooz arrived at Cognata via his third career milestone, the German Tier-1 supplier ZF, where he established the ZF Digital Venture Accelerator, building technology start-ups for ZF. Cognata and ZF have been collaborating for several years. “I was introduced to Cognata through an ADAS development startup that worked in collaboration with Cognata. This cooperation emphasized the need to reduce ADAS verification costs,” said Berkooz.

The way to autonomy is paved with endless mileage

At the beginning of the technological journey toward autonomy, AV developers based their testing mainly on test drives, intended to train the systems and verify their reliability in recognizing the environment and making decisions. However, the car industry quickly realized that these road tests have limited efficiency.

Berkooz: “Road tests are an expensive operation, and it is hard to ‘catch’ rare scenarios. The car industry is trying to find the most efficient and proper way of validating ADAS. As the level of autonomy rises, the scope of validation grows, and in a non-linear manner: the more driving aspects the vehicle is responsible for, the more scenarios must be evaluated, and the coverage must grow accordingly.”

As of today, the focus in ADAS development and verification is moving from road tests to simulators. Cognata’s simulator creates a virtual environment that imitates the road in detail, from exact street mapping and driver and vehicle behavior down to small, unexpected items such as road flaws, trash cans, signs, trees, and even a cat suddenly running into the road. The simulator can systematically produce clusters of driving scenarios that evaluate the functionality of sensors and computing units in every situation they may encounter in the jungle called “the road.”

From simulation company to data company 

However, although using a simulator significantly accelerates development and testing, one cannot base the verification of a safety system solely on a simulator, since simulation is ultimately only an approximation of reality.

Dr. Gahl Berkooz

According to Berkooz, Cognata is now formulating a strategy in which the simulator is just another instrument in a complete ecosystem of processes and solutions for developing and validating ADAS. “The simulator is not the center; the data is. Ultimately, the simulator is an instrument for generating data to be used by development, training, and validation processes. Cognata is striving to position itself as a data company, whether the data is generated by the simulator in a virtual environment or actual data collected by sensors and road tests. Our algorithms give us the capability to take road-test data and alter parameters such as the viewing angle. We take the data and turn it into metadata that generates additional data.”

Berkooz explains that validation processes are currently decentralized, and that a platform is needed to consolidate them, the same way PLM platforms do. “We are moving toward focusing on developing data tools and assets. OEMs generate a lot of data during road tests, but they have no methodology that enables them to use this data in their future development efforts. The goal is to provide a unified platform that supports data from simulations as well as from actual sensors. This will help reduce verification and road-test costs. This synergy opens a whole new world of possibilities.”

Seoul Robotics to Employ Cognata’s LiDAR Simulator

Above: Winter driving in difficult visibility conditions – in Cognata’s synthetic simulator

Cognata was chosen to provide a simulator of LiDAR sensor signals to Seoul Robotics, which develops software for analyzing sensor data to extract information about the vehicle’s environment. The collaboration deepens Cognata’s hold on the ADAS market. Founded in 2016 by CEO Danny Atsmon, Rehovot-based Cognata (near Tel Aviv) has developed a virtual platform used to train and test autonomous vehicles before they hit the road for field tests.

The system is based on several layers: a static environment, a dynamic environment, sensors, and a cloud interface. The static environment is built from realistic imaging of entire cities, including streets, trees, road defects, and so on. The dynamic layer mimics the behavior of other drivers on the road, and the sensor layer mimics the information coming from each of the roughly 40 different sensors found today in autonomous vehicles.

The simulation software of choice for Innoviz

Cognata is well acquainted with the field of LiDAR. In December 2019, it was selected by Innoviz to test Innoviz’s LiDAR technology. Cognata’s software can simulate how Innoviz’s LiDAR signals are reflected from different surfaces and materials, and how the sensors will function under different road conditions. A few days later, it was also chosen by Rehovot-based Foresight to test its QuadSight system, which uses two infrared cameras and two visible-light cameras to produce stereoscopic (three-dimensional) machine vision.

The agreement with Seoul Robotics is Cognata’s second major deal in Korea. In August 2020, it was selected by Hyundai MOBIS to supply a simulator for the development of ADAS systems and autonomous vehicles. Hyundai MOBIS is a Tier-1 supplier to the Korean automotive industry and manufactures auto parts for Hyundai, Kia, and Genesis Motors.

Cognata Raised $18.5M for Autonomous Vehicle Simulation

Cognata has closed an $18.5 million funding round led by Scale Venture Partners, with participation from existing investors Emerge, Maniv Mobility, and Airbus Ventures, joined by new investor Global IoT Technology Ventures of Japan. The company will use the funding to grow its engineering group and to start commercial operations in the United States, Europe, and Asia.

Cognata developed an automotive simulation platform that combines artificial intelligence, deep learning, and computer vision to provide a realistic virtual environment that accurately simulates real-world test driving. The approach allows autonomous-vehicle safety to be validated via simulation, accelerating development by enabling a broad range of scenarios to be tested in a safe and controlled environment.

Driving a virtual car in a virtual street in a virtual city…

Cognata’s virtual-reality simulator and engine enable autonomous-car manufacturers to run thousands of different scenarios based on various geographic locations, driver behaviors, and interactions with other road users. The company was founded in 2016 by a team of experts in deep learning, autonomous vehicles, and computer vision, led by CEO and founder Danny Atsmon (photo above).

Recently, it announced that Autonomous Intelligent Driving GmbH (AID), a wholly owned subsidiary of AUDI AG, selected Cognata as its autonomous-vehicle simulation partner. “Simulation is critical to driving autonomous vehicle technology forward,” said Danny Atsmon.

Cognata uses patented computer-vision and deep-learning algorithms to automatically generate an entire city simulator, including buildings, roads, lane markings, traffic signs, and even trees and bushes. According to Atsmon, “You need 10 billion miles, or hundreds of years of active driving, to bring an autonomous vehicle to the human level. We can shave years off the time and budget required to bring an autonomous vehicle to market.”