Rivian Unveils Autonomous Driving Platform, Signals Shift Into an AI Company

[Pictured above: Rivian R1S SUV. Source: Wikipedia]

At its Autonomy & AI Day event held last week, electric vehicle manufacturer Rivian unveiled its technological roadmap toward advanced autonomous driving and, for the first time, revealed the core of its next-generation autonomy platform. The announcements centered on a proprietary compute chip developed in-house, a new software platform based on a large driving model, and deep integration of artificial intelligence across all layers of the system.

Rivian is a U.S. electric vehicle manufacturer focused on premium electric SUVs and pickup trucks, best known for its R1T and R1S models. In 2024, the company delivered approximately 50,000 vehicles. According to the roadmap presented, the new technologies will be rolled out gradually across Rivian’s next-generation vehicles—led by the upcoming R2 platform, expected to reach the market starting in 2026. Initial deployments will offer advanced semi-autonomous capabilities, paving the way toward Rivian’s longer-term goal of Level 4 autonomous driving. Taken together, the announcements position Rivian not merely as an EV manufacturer, but as a company architecting the future of autonomy through a vertically integrated AI system—more akin to cloud and deep-tech companies than to traditional automakers.

A Custom Chip as the Foundation of Autonomy

At the heart of Rivian’s autonomous driving platform is a custom inference chip, developed in-house over several years and designed specifically to run real-time autonomous driving models. The chip delivers roughly 1,600 TOPS in sparse mode (an efficiency-oriented compute approach that skips unnecessary operations) and is optimized for continuous, low-latency processing of multi-sensor data streams while maintaining controlled power consumption. According to Rivian, the chip is a central pillar of its next-generation autonomy platform and is expected to replace the NVIDIA-based compute solutions used in earlier generations of the company’s driver-assistance systems.
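
To illustrate what a “sparse mode” rating means in practice, here is a minimal Python sketch (purely illustrative, not Rivian’s design): a multiply-accumulate loop that skips zero weights, which is how sparsity lets the same silicon report a higher effective TOPS figure than in dense mode.

```python
import numpy as np

def sparse_matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply only by the non-zero weights, skipping the rest."""
    rows, cols = np.nonzero(weights)      # positions where real work exists
    y = np.zeros(weights.shape[0])
    for r, c in zip(rows, cols):
        y[r] += weights[r, c] * x[c]      # only non-zero MACs execute
    return y

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
w[rng.random((8, 8)) < 0.5] = 0.0         # zero out roughly half the weights
x = rng.standard_normal(8)

assert np.allclose(sparse_matvec(w, x), w @ x)   # same result as dense compute
print(f"dense MACs: {w.size}, sparse MACs: {np.count_nonzero(w)}")
```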

With this move, Rivian joins a very small group of automakers that have chosen to develop their own dedicated processors for autonomous driving—a path previously taken primarily by Tesla. The decision reflects a strategic view of AI hardware as a long-term competitive asset rather than an off-the-shelf component.

Sitting above the hardware layer is a software platform built around large AI models for driving. Rivian describes a unified model that combines visual perception, scene understanding, and decision-making, replacing the traditional pipeline of separate functional modules. This marks a shift from a conventional ADAS architecture to a foundation-model approach to driving—one that learns from massive datasets, improves continuously, and is better equipped to handle edge cases. Training and refinement of the model are driven by data collected from Rivian’s vehicle fleet and fed back into the system through ongoing software updates.
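
The architectural shift can be sketched in a few lines of Python. The stubs below are hypothetical stand-ins (none of these names come from Rivian); the point is the shape of the two designs: hand-engineered hand-offs between modules versus a single learned mapping from sensors to a trajectory.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    waypoints: list  # (x, y) points for the vehicle controller to track

# Conventional ADAS pipeline: separate modules with hand-designed hand-offs.
def detect_objects(frames):              # perception module (stub)
    return [{"kind": "car", "x": 12.0, "y": 1.5}]

def build_scene(objects, radar):         # scene-understanding module (stub)
    return {"free_ahead_m": min(o["x"] for o in objects)}

def plan_path(scene):                    # decision-making module (stub)
    stop_at = scene["free_ahead_m"] - 5.0
    return Trajectory([(0.0, 0.0), (stop_at / 2, 0.0), (stop_at, 0.0)])

def modular_stack(frames, radar):
    return plan_path(build_scene(detect_objects(frames), radar))

# Foundation-model approach: one model, trained end-to-end on fleet data.
def unified_driving_model(frames, radar):
    # In a real system a single network ingests every sensor stream and
    # emits a trajectory directly; this stub stands in for that mapping.
    return Trajectory([(0.0, 0.0), (4.0, 0.1), (8.0, 0.3)])

print(modular_stack(["frame0"], ["ping0"]).waypoints)
print(unified_driving_model(["frame0"], ["ping0"]).waypoints)
```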

LiDAR and the Sensor Stack Debate

Central to Rivian’s strategy is a clear commitment to a multi-modal sensing architecture that combines cameras and radar today, with LiDAR sensors to be integrated in the future. This choice represents one of the company’s most significant points of differentiation from Tesla. While Tesla adheres to a camera-only approach and has entirely abandoned LiDAR for ideological and cost reasons, Rivian has opted for a more pragmatic path, viewing LiDAR as a critical enrichment layer for three-dimensional spatial understanding.

From Rivian’s perspective, LiDAR is not a replacement for vision-based perception, but a complementary system that adds redundancy, improves object-detection accuracy, and is particularly valuable in challenging lighting conditions and edge-case scenarios. The decision also sends a clear signal to the sensor ecosystem that LiDAR still has a meaningful role to play in autonomous driving, especially when deeply integrated into a unified AI stack. For companies such as Israel-based Innoviz, Luminar, and others, this represents significant potential—not only because Rivian could emerge as an important customer, but because its architectural choice reinforces LiDAR’s position as a core component of automotive sensing systems.
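
The redundancy argument can be made concrete with a toy late-fusion rule (hypothetical weights and thresholds; production stacks typically fuse at the feature level inside the learned model): LiDAR confidence can carry a detection that the camera alone would drop in low light.

```python
def fused_detection(camera_conf: float, lidar_conf: float,
                    low_light: bool) -> bool:
    """Accept an object if the weighted confidence clears a threshold,
    trusting the camera less when lighting is degraded."""
    cam_weight = 0.3 if low_light else 0.6    # assumed weights, for illustration
    lidar_weight = 1.0 - cam_weight
    score = cam_weight * camera_conf + lidar_weight * lidar_conf
    return score > 0.5

# At night the camera alone would miss this pedestrian; LiDAR carries the call.
print(fused_detection(camera_conf=0.35, lidar_conf=0.9, low_light=True))   # True
print(fused_detection(camera_conf=0.35, lidar_conf=0.2, low_light=True))   # False
```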

Autonomy as Rivian’s Core Value Engine

Beyond the specific technologies, Rivian’s announcements reflect a broader strategic shift in which autonomy and AI are positioned as the company’s future value engine. Autonomous driving is not framed as another vehicle feature, but as part of a comprehensive AI system built on vertical control over chips, software, and models—designed to turn these capabilities into internal strategic assets rather than dependencies on external suppliers.

In parallel, Rivian emphasized that it is embedding AI across its broader operations, from data collection and fleet analytics to optimization of development and manufacturing processes, as well as maintenance, logistics, and customer service. This mindset brings Rivian closer to the operating model of cloud and AI companies than to that of traditional automakers, aligning it philosophically with technology-driven players such as Tesla and Apple.

The broader implications for the industry are clear. The battle for autonomy is no longer about which sensor is superior, but about who controls the entire AI loop—from data to models to compute. Custom chips are becoming strategic tools, large driving models are replacing modular architectures, and autonomy is increasingly viewed as an evolving platform rather than a one-time promise. Rivian may still be some distance from fully realizing Level 4 autonomy, but the direction it has outlined offers a glimpse of what the industry is likely to look like in the years ahead—one in which the vehicle is, above all, an AI system on wheels.

Foretellix and Voxel51 Partner on Advanced 3D Reconstruction for Autonomous-Driving Training

Israeli company Foretellix and U.S.-based Voxel51 have announced a new partnership aimed at accelerating the training and verification of autonomous-vehicle (AV) systems. Together, they are introducing an end-to-end workflow that transforms raw driving logs into editable, AI-generated 3D scenes that can be reconstructed, manipulated and deployed at scale in simulation environments. The collaboration leverages advanced neural-reconstruction techniques and visual-data processing to offer AV developers a powerful new tool for improving training, testing and validation workflows.

At the heart of the joint effort are drive logs: rich datasets collected from autonomous and semi-autonomous vehicles during real-world operation. These logs typically include video, lidar, radar, GPS, IMU measurements, vehicle-system status and annotated objects. For years, such logs have been indispensable for training perception models, yet they are inherently limited: they capture only what actually occurred on the road. Foretellix and Voxel51 aim to convert these raw logs into full 3D reconstructions that can be expanded far beyond reality, enabling the creation of synthetic variations, edge cases and stress scenarios that cannot be consistently captured in physical testing.
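
As a rough sketch, one time-slice of such a log might look like the record below; the field names are illustrative, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DriveLogFrame:
    """One time-slice of a drive log (hypothetical field layout)."""
    timestamp_us: int                      # capture time, microseconds
    camera_jpeg: bytes = b""               # compressed video frame
    lidar_points: list = field(default_factory=list)   # (x, y, z, intensity)
    radar_tracks: list = field(default_factory=list)   # (range_m, azimuth_deg, v_mps)
    gps: tuple = (0.0, 0.0)                # (latitude, longitude)
    imu: tuple = (0.0, 0.0, 0.0)           # (accel_x, accel_y, yaw_rate)
    vehicle_state: dict = field(default_factory=dict)  # speed, steering, gear...
    annotations: list = field(default_factory=list)    # labeled objects

frame = DriveLogFrame(timestamp_us=1_712_000_000_000,
                      lidar_points=[(3.1, 0.4, 0.2, 88)],
                      annotations=[{"kind": "pedestrian", "bbox": (12, 40, 64, 128)}])
print(frame.gps, len(frame.lidar_points))
```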

In the joint workflow, Foretellix first classifies and analyzes the driving data, identifying coverage gaps relative to the vehicle’s operational design domain (ODD). Voxel51 then processes the raw inputs (cleaning noise, performing consistency checks, cross-sensor alignment and contextual interpretation) to prepare the material for AI reconstruction. Their combined pipeline draws on 3D Gaussian Splatting (3DGS) and advanced rendering technologies to create realistic, editable 3D scenes. Foretellix then re-enters the loop, generating controlled variations of each scenario, modifying environmental elements, injecting external events and producing synthetic sensor data that mimics real-world output. The final stage is carried out in Voxel51’s visualization and analytics platform, ensuring that the mixed real-and-synthetic dataset meets rigorous quality standards for model training.
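
Schematically, the described workflow can be expressed as a chain of stages, as in the sketch below. Every function name and data shape here is a hypothetical stand-in for the role the announcement assigns to each company.

```python
def find_coverage_gaps(drive_logs, odd_spec):
    """Stage 1 (Foretellix's role): flag ODD areas with little or no data."""
    seen = {log["scenario"] for log in drive_logs}
    return [s for s in odd_spec if s not in seen]

def prepare_for_reconstruction(drive_logs):
    """Stage 2 (Voxel51's role): clean noise, drop misaligned frames."""
    return [log for log in drive_logs if log["sensors_aligned"]]

def reconstruct_scene(clean_logs):
    """Stage 3: neural reconstruction (e.g. 3DGS) into an editable scene;
    a plain dict stands in for the scene representation."""
    return {"scene": "editable-3d", "sources": len(clean_logs)}

def generate_variations(scene, knobs):
    """Stage 4 (Foretellix's role): controlled environment variations."""
    return [{**scene, **k} for k in knobs]

def curate(dataset):
    """Stage 5 (Voxel51's role): quality gate before training."""
    return [s for s in dataset if s["sources"] > 0]

logs = [{"scenario": "highway-merge", "sensors_aligned": True}]
print("gaps:", find_coverage_gaps(logs, ["highway-merge", "roundabout-night"]))
scene = reconstruct_scene(prepare_for_reconstruction(logs))
print(len(curate(generate_variations(scene, [{"weather": "rain"}, {"weather": "fog"}]))))
```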

Voxel51, a major player in computer vision, is headquartered in Michigan and specializes in large-scale visual-data management. Its flagship product, FiftyOne, allows teams to deeply inspect sensor datasets, uncover labeling errors, assess data quality and detect hidden patterns. Combined with Foretellix’s expertise in AV scenario simulation, the partnership creates a seamless technological chain, from recorded reality to layered, simulation-ready virtual environments.

The implications are significant. Instead of relying solely on costly, months-long field testing that often misses rare events, AV developers can now synthesize edge cases on demand, recreate unusual incidents, tweak environmental parameters—lighting, weather, traffic—and slow down or accelerate events for analysis. For teams working on perception, planning and prediction algorithms, this represents a paradigm shift: hundreds of scenario variations can be generated from a single logged moment, exposing model weaknesses and enabling rapid iteration without sending another vehicle onto the road.
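
The fan-out is easy to see with a small combinatorial sketch (knob values invented for illustration): a handful of environment parameters already multiply one logged moment into dozens of scenarios, and richer parameter sets push that into the hundreds.

```python
from itertools import product

# Environment knobs mentioned above (lighting, weather, traffic) plus a
# playback-speed knob for analysis; all values are illustrative.
lighting = ["noon", "dusk", "night"]
weather = ["clear", "rain", "fog"]
traffic = ["sparse", "dense"]
playback = [0.5, 1.0, 2.0]

variations = [
    {"lighting": l, "weather": w, "traffic": t, "speed": s}
    for l, w, t, s in product(lighting, weather, traffic, playback)
]
# One logged moment fans out into dozens of test scenarios.
print(len(variations))   # 3 * 3 * 2 * 3 = 54
```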

Beyond product development, the collaboration may influence how the AV industry approaches regulatory validation. Authorities increasingly require evidence of ODD coverage, robustness against edge cases and consistent behavior in complex scenarios. If real-world scenes can be faithfully reconstructed and expanded with controlled synthetic variations, validation could become more systematic, transparent and comprehensive.

Ultimately, the Foretellix–Voxel51 partnership reflects a sweeping industry trend: the shift from relying solely on raw real-world data to a blended model in which rich virtual environments complement physical driving. Instead of learning only from what has happened, AV systems can now be tested against what could happen. For autonomous-driving developers, this promises higher safety, improved robustness and shorter development cycles, bringing the industry closer to vehicles that can reliably navigate the full complexity of the real world.

Innoviz to Ramp Up Production Tenfold

[Photo above: Innoviz CEO Omer Keilaf]

Automotive LiDAR provider Innoviz Technologies (Rosh HaAyin, Israel) is gearing up for the global launch of robo-taxi autonomous driving services, planned to begin in 2026. The company has decided to increase its mass-production volume of LiDAR systems tenfold at the facilities of its manufacturing partner, Fabrinet. Initial serial production of the InnovizTwo systems at Fabrinet started last month.

This was revealed by Innoviz CEO Omer Keilaf during the earnings call this week, following the release of the company’s Q2 2025 financial results. Keilaf also shared that Innoviz is currently developing its fourth sensor, the InnovizThree, which will be unveiled to customers later this month at the ITS exhibition in Atlanta. “InnovizThree represents a significant leap forward in performance, cost, and size,” he said.

The company is entering a new phase: sales grew by approximately 46% compared to the same quarter of 2024, to a total of $9.7 million. H1 2025 sales, amounting to approximately $27.5 million, have already surpassed total 2024 sales. Accordingly, Innoviz has updated its 2025 sales forecast to $50-60 million, roughly double its 2024 sales.

A New Funding Program

Despite having $79.4 million in cash and cash equivalents, the company has announced a $75 million At-The-Market (ATM) program. This mechanism allows Innoviz to sell new or existing shares as needed to raise capital for temporary requirements. This may explain the approximately 12.6% drop in the company’s stock on the Nasdaq, which brought its market capitalization to $316 million.
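
For scale, a back-of-the-envelope calculation, assuming (hypothetically) that the full program were sold near the current valuation:

```python
# Illustrative dilution math only; actual dilution depends on how much of
# the $75M is ultimately sold, and at what share price.
market_cap_musd = 316.0
atm_size_musd = 75.0

new_share_fraction = atm_size_musd / (market_cap_musd + atm_size_musd)
print(f"~{new_share_fraction:.1%} of the enlarged company if fully utilized")  # ~19.2%
```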

According to Keilaf, Innoviz has signed a Statement of Development Work (SODW) agreement with one of the world’s five largest passenger-car manufacturers. The agreement involves making several adjustments to the InnovizTwo LiDAR sensor to adapt it to the manufacturer’s specific requirements. “We have passed the RFI and RFQ stages. This is a process we are familiar with, having made specific adjustments to the sensors for Volkswagen’s ID.Buzz autonomous taxi program.”

The Surprising Advantages of Robots

The ID.Buzz autonomous driving system is based on Mobileye’s solution, which includes nine Innoviz sensors in each taxi. Keilaf also mentioned that the company has entered negotiations with other car manufacturers whose systems are based on Mobileye.

An interesting new market for Innoviz lies in industrial and security applications, outside the automotive industry. Keilaf: “The majority of our effort, 95%, is dedicated to the automotive sector, where the leading company holds a large portion of the market. However, we recently discovered that in the robotics and industrial markets, there is a need for LiDAR sensors that meet functional safety requirements.

“Because we come from the automotive market, we meet these conditions, unlike competing LiDAR manufacturers who did not come from automotive.” The company’s new sensor, InnovizSMART, is based on automotive technologies and is intended for industrial and robotics systems. It provides detection at ranges of up to 450 meters, even in challenging visibility and weather conditions.

[Pictured: InnovizSMART LiDAR sensor]

The “Gold Rush” Returns

Keilaf: “We recently completed the sensor’s integration into the Nvidia Jetson Orin environment and ecosystem. This means it is included in the reference designs that Nvidia provides to Orin customers. One reason for this move is that the major integrators in the industrial and robotics market frequently use Jetson.” For Innoviz, this new market offers several additional advantages: “The timelines here are much shorter, and the profit margins are very high compared to the automotive industry.”

Meanwhile, a new trend is emerging in the AV market: “It is returning to the ‘Gold Rush’ phase. Manufacturers are now working on new car models and adopting Level 3 (L3) autonomous systems. We are also seeing a recovery in the market for higher-level autonomy (L4) systems, but here the push comes mainly from the commercial market: autonomous taxis, autonomous trucks, and autonomous commercial transport vehicles.”

Exclusive: Cognata’s new strategy

Cognata, based in Rehovot, Israel, announced a few weeks ago the appointment of Dr. Gahl Berkooz as Chief Data Officer and President, Americas. The position is new to the company, and staffing it with such a senior, experienced figure reveals Cognata’s new strategy: moving from supplying a simulator toward offering a complete procedural solution for both the development and the validation of ADAS and autonomous driving systems. The goal of this platform is to cover all phases, from creating and managing data to executing the verification processes required throughout the product lifecycle.

Nowadays, the smart vehicle is conceptualized as a computer on wheels: an instrument that produces enormous amounts of data that must be managed and utilized through analytics and monetization. However, when Berkooz joined Ford in 2004, the connected-vehicle field was in its infancy, and the interface between cars, data and computers was far less natural. In a conversation with Techtime, Berkooz, who holds a Ph.D. in Applied Mathematics from Cornell University, says: “The data area among major vehicle manufacturers was a mess. As the connected-car field evolved, it became clear that we needed a new approach to data management and the way we can utilize it.”

During his time at Ford, Berkooz was in charge of establishing Information Management and Analytics at the OEM, covering both organizational data and data produced and consumed by drivers. He formulated the way data is collected and standardized to produce analytics and monetization. Later, he moved to General Motors as Chief of Analytics for GM’s Global Connected Customer Experience (OnStar) Division, where he led similar processes between 2016 and 2018.

Berkooz arrived at Cognata via the third milestone of his career, the German Tier-1 supplier ZF, where he established the ZF Digital Venture Accelerator, building technology start-ups for ZF. Cognata and ZF have been collaborating for several years. “I was introduced to Cognata through an ADAS development startup that worked in collaboration with Cognata. This cooperation emphasized the need to reduce ADAS verification costs,” said Berkooz.

The way to autonomy is paved with endless mileage

At the beginning of the technological journey toward autonomy, AV developers based their testing mainly on test drives, intended to train the systems and verify their reliability in recognizing the environment and making decisions. However, the car industry quickly realized that such road tests have limited efficiency.

Berkooz: “Road tests are an expensive operation, and it is hard to ‘catch’ rare scenarios. The car industry is trying to form the most efficient and proper way of validating ADAS. The higher the level of autonomy, the greater the scope of validation, and in a non-linear manner: as the vehicle becomes responsible for more driving aspects, more scenarios must be evaluated, and the coverage must grow accordingly.”
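
A toy calculation makes the non-linearity concrete. If each driving aspect the system takes over adds an independent dimension to the scenario space (dimension counts and bin sizes assumed purely for illustration), required coverage grows multiplicatively, not additively.

```python
# Each dimension (weather, lighting, actor behavior...) is discretized into
# a few bins; each autonomy level is assumed to add more such dimensions.
values_per_dimension = 5
aspects_per_level = {"L1": 2, "L2": 4, "L3": 7, "L4": 11}   # assumed counts

for level, dims in aspects_per_level.items():
    print(level, values_per_dimension ** dims)
# L1 25, L2 625, L3 78125, L4 48828125: each level multiplies the workload
```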

As of today, the focus in ADAS development and verification is moving from road tests to simulators. Cognata’s simulator creates a virtual environment that imitates the road in detail, from exact street mapping and the behavior of drivers and cars down to small, unexpected items such as road flaws, trash cans, signs, trees and even a cat suddenly running into the road. The simulator can systematically produce clusters of driving scenarios that evaluate the functionality of sensors and computing units in every situation they may encounter in the jungle called “the road.”

From simulation company to data company 

However, although using a simulator significantly accelerates development and testing, one cannot base the verification of a safety system solely on a simulator, since simulation is ultimately only an approximation of reality.

[Pictured: Dr. Gahl Berkooz]

According to Berkooz, Cognata is now formulating a strategy in which the simulator is just one instrument in a complete ecosystem of processes and solutions for developing and validating ADAS. “The simulator is not the center; the data are. Ultimately, the simulator is an instrument for generating data to be used by development, training and validation processes. Cognata is striving to position itself as a data company, whether the data is generated by the simulator in a virtual environment or collected by sensors during road tests. Our algorithms give us the capability to take road-test data and alter parameters such as the sight angle. We take the data and turn it into metadata that generates additional data.”
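
A minimal sketch of that kind of augmentation, assuming a recorded LiDAR point cloud and ignoring the occlusion handling and sensor re-simulation a real pipeline would need: re-rendering the same recording from a yawed viewpoint turns one drive into several training samples.

```python
import numpy as np

def rotate_viewpoint(points: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Re-express recorded (x, y, z) points as if the sensor were yawed
    by yaw_deg: a toy version of 'altering the sight angle'."""
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    return points @ rot.T

cloud = np.array([[10.0, 0.0, 0.5], [8.0, 2.0, 0.3]])   # recorded points
for angle in (-10, 0, 10):            # one recording, three training viewpoints
    print(angle, np.round(rotate_viewpoint(cloud, angle), 2)[0])
```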

Berkooz explains that validation processes are currently decentralized, and that a platform is needed to concentrate them all, in the same way PLM platforms do for product data. “We are moving toward focusing on developing data tools and assets. OEMs generate a lot of data during road tests, but they have no methodology that enables them to use this data in their future development efforts. The goal is to provide a unified platform that supports data from simulations as well as from actual sensors. This will help reduce verification and road-test costs. This synergy opens a whole new world of possibilities.”

IDF chooses Cognata’s rough-terrain simulator

The IDF, through the Department of Production and Procurement in the Ministry of Defense, has purchased Cognata’s simulator, which simulates terrain driving. The IDF will use the simulator to validate and train autonomous driving algorithms, as part of the development of autonomous vehicles (AV) and advanced driver-assistance systems (ADAS) for military use.

Cognata’s AV off-road simulator is a new platform intended for training and testing autonomous vehicles in difficult terrain conditions on unpaved roads, including military vehicles such as unmanned platforms and remotely operated vehicles (ROV). The system simulates many scenarios, including off-road driving on unpaved roads, narrow trails, steep slopes, muddy or sandy ground, obstacles along the path such as rocks or vegetation, and driving in poor visibility conditions such as darkness or limited view angles.

Simulating terrain driving poses complex challenges. Unlike public roads, where the driving route is clear and regulated, in rough-terrain maneuvering the AV must continuously estimate a feasible route without rolling over or encountering an impassable obstacle. Shay Rootman, Director of Business Development at Cognata, explains to Techtime that the main challenge is simulating the physics of rough-terrain driving: “In the field, there are no predefined driving outlines. One of the major physical aspects that must be taken into account in rough-terrain driving is the friction generated between the ground and the autonomous vehicle, whether the ground is muddy, sandy or bumpy. The vehicle has to get a fairly good estimation of the ground conditions in order to adjust its speed and angle of approach, and to judge whether the obstacle ahead is passable.”
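
A toy version of that ground-condition reasoning, using the classic static-traction rule that a slope is climbable only while its tangent stays below the ground’s friction coefficient (coefficients and safety margin assumed for illustration):

```python
import math

def is_slope_passable(slope_deg: float, mu: float, margin: float = 1.2) -> bool:
    """Static traction check: climbable while tan(slope) * margin < mu."""
    return math.tan(math.radians(slope_deg)) * margin < mu

# Assumed friction coefficients for different ground types.
ground = {"dry soil": 0.6, "sand": 0.35, "mud": 0.2}
for name, mu in ground.items():
    passable = [s for s in (5, 10, 15, 20, 25) if is_slope_passable(s, mu)]
    print(name, "-> passable slopes (deg):", passable)
# dry soil -> [5, 10, 15, 20, 25], sand -> [5, 10, 15], mud -> [5]
```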

In recent years, the use of unmanned military vehicles has become increasingly common in armed forces around the world. They are used mainly for reconnaissance missions, mine clearance and route opening in situations where human presence might be dangerous. For example, the US Army is developing autonomous transport trucks that can move independently in a convoy.

Israeli defense companies have also been investing in autonomous vehicle development in recent years. Israel Aerospace Industries (IAI) is developing a variety of combat systems, such as an autonomous robotic patrol for detecting and clearing improvised explosive devices (IEDs), and autonomous bulldozers for carrying out complex combat-engineering missions in threatened areas. Elbit, too, has begun developing unmanned vehicles for routine security missions. In 2016, together with the IDF, the company developed the “Border Keeper,” deployed along the Gaza Strip border and the border with Egypt.

“The military autonomous robotics area is booming. When we started to work with the IDF and the Ministry of Defense, we noticed that simulators are required in the military AV world in the same way they are required in the civilian AV world,” says Rootman.

Cognata’s flagship product is a simulator system for training and testing autonomous vehicles. The simulator generates realistic imagery of complete cities, including streets, trees, road obstacles, cars, people and more. It also generates the information produced by various sensors such as cameras, infrared systems and LiDAR. The system allows the generation of multiple scenarios; using it shortens R&D and verification schedules and reduces the number of test drives. In recent years Cognata has developed simulators for different types of AVs, such as agricultural machinery, mining-logistics vehicles and vehicles intended for rough-terrain transportation.

AdaSky to establish in-house manufacturing facility

The Israeli start-up AdaSky announced that it has secured a $15 million investment from existing shareholders, Japan’s Kyocera and South Korea’s Sungwoo Hitech, as part of a Series B investment round. AdaSky stated that the funds will support the commercialization of the thermal sensor it has developed for the automotive industry and other applications, and that it intends to establish an in-house, end-to-end manufacturing and assembly line.

AdaSky’s first product, Viper, combines a high-performing thermal camera with state-of-the-art machine-vision algorithms in one complete solution that can be added to any autonomous vehicle to help it see better and analyze its surroundings. Viper passively collects far-infrared (FIR) signals by detecting the thermal energy radiated from objects and body heat. AdaSky’s algorithms process the signals collected by the camera to provide accurate object detection and scene analysis, giving the vehicle the ability to precisely detect pedestrians at ranges of a few hundred meters, allowing more distance in which to react.
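
The principle can be sketched with a naive threshold detector. This is illustrative only (AdaSky’s production algorithms are learned classifiers, not bare thresholds): body heat stands out against a cooler background regardless of visible light.

```python
import numpy as np

def detect_warm_objects(fir_frame: np.ndarray, threshold_c: float = 30.0):
    """Return a bounding box around pixels warmer than the threshold,
    or None if nothing warm is in view."""
    ys, xs = np.nonzero(fir_frame > threshold_c)
    if xs.size == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())   # (x0, y0, x1, y1)

# Synthetic 8x8 "thermal frame": cool road (~15 C) with a warm pedestrian (~36 C).
frame = np.full((8, 8), 15.0)
frame[2:6, 3:5] = 36.0
print(detect_warm_objects(frame))   # (3, 2, 4, 5): found even in total darkness
```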

Viper is the first high-resolution thermal camera for autonomous vehicles with minimal size, weight and power consumption and no moving parts, at a price suited for the mass market. Viper generates a new layer of information, originating from a different band of the electromagnetic spectrum, and significantly improves the classification, identification and detection of objects and of the vehicle’s surroundings at both near and far range.

AV, Covid-19 detection and Smart City

Based on its core technology, AdaSky has released three product lines: a thermal sensor for ADAS and AV systems; a customized system, developed recently, that monitors the body temperature of passersby in crowded spaces to detect people potentially infected with Covid-19; and a thermal system for smart-city applications. In March, AdaSky announced a first agreement with an EV manufacturer to integrate its sensor into a Level 4 AV model.