Israeli Air Force Deploys In3D XR Simulator to Train Firefighting Teams

Israeli company In3D has announced a new collaboration with the Israeli Air Force to deploy an advanced simulation system based on extended reality (XR) technologies for training firefighting teams at air force bases. The system is designed to simulate complex fire scenarios inside aircraft hangars—sensitive operational environments that contain aircraft, fuel systems, and technical equipment—allowing crews to practice emergency response in a safe yet realistic setting.

The simulator developed by the company uses detailed 3D modeling of hangar environments to create an interactive training experience that includes simulated flames, smoke, and noise, alongside dynamic scenarios that replicate different emergency situations. Trainees enter the virtual environment using VR or mixed-reality headsets and perform firefighting and response procedures in real time, while instructors monitor their actions on an external display and analyze the decision-making process.

The system enables firefighting teams to train for a wide range of complex fire scenarios inside hangars without disrupting operational activity or conducting drills involving live fire. In the past, such training often required clearing an entire hangar and sometimes igniting controlled fires for practice—an approach that involved safety risks and logistical challenges. The new simulator allows multiple scenarios to be run within a virtual environment that replicates real-world conditions, including extreme situations that are difficult or impossible to reproduce in physical drills.

“Virtual reality enables teams to prepare for extreme situations in a highly realistic way while maintaining a controlled level of pressure,” said Nathanael Reicher, CEO of In3D. “This allows organizations to conduct high-quality training without disrupting operations or exposing people and equipment to unnecessary risk.”

A Startup Born from Operational Training Needs

In3D was founded in 2017 by Nathanael Reicher and Ran Chaikin, both veterans of Israel’s defense establishment who served in command and training roles during their military careers. During their service, the two identified a gap between traditional military training methods and the advanced technologies emerging in the gaming and virtual reality industries.

They launched the company to develop simulation systems based on virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies designed to replicate real operational environments as accurately as possible—whether on a battlefield, in an operating room, or in an industrial facility.

The company’s technology is built around creating detailed 3D models of real-world environments, sometimes using field scans and measurements, and transforming them into interactive simulations where different scenarios can be executed. Users enter the environment using XR headsets, move within the space, and perform actions in real time, while the system records and analyzes their behavior for training and feedback purposes.


Expanding Activity in Defense and Healthcare

Beyond its work with the Israeli Air Force, In3D has developed additional projects for defense and training organizations. The company has created simulation systems designed to rehearse emergency scenarios and disaster sites, allowing professionals to train under conditions that closely resemble real-world situations.

For example, In3D developed a mixed-reality simulation system for Israel’s Military Rabbinate, used to train battalion rabbis for complex battlefield and disaster scenarios. The system enables personnel to practice procedures such as casualty identification and evacuation under operational pressure.

The company’s technology has also found applications in healthcare. In3D has developed systems that convert CT and MRI scans into interactive 3D models, allowing surgeons to enter a virtual environment, examine patient anatomy from different angles, and plan surgical procedures in advance.

In addition, the company operates in areas such as professional training, industrial maintenance, and engineering visualization, often using digital twin principles—creating virtual replicas of physical environments that enable simulations and scenario analysis.

Training Closer to Reality

The new project with the Israeli Air Force reflects a broader shift in defense and industrial training, where extended-reality technologies are increasingly used to prepare personnel for extreme situations. By combining 3D simulation, real-time interaction, and instructor oversight, such systems allow organizations to conduct complex training exercises more frequently and at lower cost.

For Israeli Air Force firefighting teams, the result is the ability to repeatedly train for hangar fire scenarios—among the most hazardous environments at air bases—and improve operational readiness for real emergencies.

Model Poisoning and AI Manipulation: New Cyber Risks Reach the Automotive Industry

Artificial intelligence is rapidly becoming a core technology in the transportation industry. Automakers and technology companies are integrating AI systems into autonomous driving platforms, advanced driver-assistance systems (ADAS), fleet management, and smart services within connected vehicles. Advanced models are trained on massive volumes of driving and sensor data to improve driving performance, optimize operations, and enable new smart mobility services.

However, alongside these technological benefits, the rise of AI is also creating a new layer of cyber risk.

According to the annual report by Upstream Security, which maps cyber threats across the automotive and smart mobility ecosystem, the accelerated integration of artificial intelligence into transportation systems is introducing a new range of attack vectors. This year marks the eighth edition of the report, which places particular emphasis on emerging risks associated with AI technologies.

The report is based on an analysis of 494 publicly reported cyber incidents recorded in 2025, as part of a database of 2,371 incidents documented since 2010, alongside intelligence gathered from deep- and dark-web sources and monitoring of about 1,996 active threat actors in the field.

Researchers note that “AI is reshaping the cyber landscape for automotive and smart mobility.” According to the report, AI systems are now integrated across multiple layers of the mobility ecosystem—from development environments and cloud services to vehicle applications and customer-facing services—meaning that vulnerabilities in models or surrounding interfaces can become entry points for attackers. The report also warns that AI is not only a target of attacks but increasingly a tool used by attackers themselves. It estimates that roughly 80–90% of cyber operations worldwide can already be executed autonomously by AI, enabling attacks to be carried out at far greater scale and speed than before—a trend likely to affect smart mobility systems as well.

Manipulating Driving Models

One of the key threats highlighted in the report is Prompt Injection attacks, in which an attacker feeds malicious input into an AI system to manipulate the model. According to the report, the risk is particularly relevant for AI systems integrated into cloud services, vehicle applications, and customer service platforms—systems that accept text or voice input from users or other services. In such cases, attackers can craft malicious inputs designed to trick the AI into performing unintended actions, bypassing authorization mechanisms, or exposing sensitive information.

The report also highlights additional risks associated with AI systems, including training data poisoning, sensitive data exposure, and model manipulation. Researchers note that the integration of large language models into mobility services creates new vulnerabilities, writing that “LLMs are being integrated across development, operations, and customer-facing mobility services, introducing new vulnerabilities.” The weakness may arise not only during model deployment but throughout the model lifecycle. In the case of training data poisoning, attackers may attempt to influence the datasets used to train models—either by inserting malicious data into data repositories or through external systems and third-party services that supply data to AI systems. The report also points to risks such as sensitive information leakage through models connected to enterprise systems, manipulation of models through crafted inputs, and attacks designed to disrupt the operation of the model itself. In connected mobility environments—where AI models operate alongside cloud services, APIs, and operational platforms—a vulnerability in one component can become a broader entry point into vehicle and mobility systems.

Another risk stems from the broader ecosystem in which AI systems operate. Many of these systems rely on cloud infrastructure and software interfaces, meaning that a breach in one service could affect others. The report warns that the adoption of AI-powered third-party services “introduces new supply chain risks,” where vulnerabilities in external providers could potentially impact the systems of multiple automakers.

Cyber Attacks on the Rise

These concerns are reinforced by broader trends in cyber threats targeting the mobility sector. According to the report, 494 publicly reported cyber incidents were recorded in the automotive and smart mobility sector in 2025—nearly double the number recorded in 2024—as part of a dataset of more than 2,300 incidents documented since 2010. Ransomware attacks have become a major threat in the industry, accounting for 44% of incidents and often carried out by organized cybercrime groups.

The report also shows that most attacks do not require physical access to vehicles. About 92% of incidents are conducted remotely via the internet or communication networks, and many target the digital infrastructure surrounding vehicles. In practice, the primary attack surface is not necessarily the vehicle itself but the cloud systems and APIs connecting it to external services. Researchers emphasize that “backend servers and APIs remained the dominant exposure point” in connected mobility systems.

The nature of the damage is also evolving. Data breaches have become the most common impact of cyber incidents in the transportation sector, accounting for 68% of cases. Other incidents involve service disruption, system control, or digital fraud.

The report concludes that as artificial intelligence becomes more deeply embedded in mobility systems, the cybersecurity perimeter of the vehicle continues to expand. Protecting connected transportation will therefore require securing the entire ecosystem—from the vehicle itself to cloud infrastructure, software interfaces, and the AI systems increasingly powering modern mobility services.

Main image source: Upstream Security

Israeli Concept, Iranian Drone, American Copy

photo above: US made LUCAS loitering munition

One of the most surprising weapons in the current war in Iran is the U.S. Army’s LUCAS suicide drone, which only entered service in September 2025 and is already influencing the course of the war. This drone is unusual: it is essentially the American copy of the Iranian Shahed-136 drone, which Iran unveiled in 2021 and soon afterward supplied to Russia in large quantities. Today, Russia manufactures it domestically under the name Geran-2.

In June 2025 the United States unveiled the new Low-Cost Uncrewed Combat Attack System (LUCAS). In July 2025 the Secretary of Defense approved accelerated procurement, and by December 2025 the U.S. Central Command (CENTCOM) announced the establishment of a dedicated task force to operate it — Task Force Scorpion Strike (TFSS) — along with the deployment of the first operational squadron in the Middle East. Ten days ago, when the joint Israeli-American attack on Iran began, CENTCOM released a special statement: “For the first time in history, TFSS has deployed one-way attack drones. These low-cost systems were developed based on the Iranian Shahed and are manufactured in the United States.”

A Training Concept That Became a Weapon

How did it happen that the United States copied an Iranian design and began producing it? And is the design Iranian at all — or Israeli? Let us begin with the United States. Behind these inexpensive attack drones stands SpektreWorks, a company based in Phoenix, Arizona. The Pentagon asked the company to develop a training platform that would allow air-defense crews to practice intercepting attack drones. To assist the development process, the company was provided with an Iranian Shahed drone captured in the Middle East.

SpektreWorks conducted a comprehensive examination of the Iranian design. To provide an authentic training experience, the company performed reverse engineering and produced an American version of the Iranian drone. Not coincidentally, it named the system FLM-136, hinting at its Iranian origin (Shahed-136).

From right to left: SpektreWorks FLM-136 and Shahed-136
From right to left: SpektreWorks FLM-136 and Shahed-136

The platform is small, weighing about 80 kg, and equipped with a small, inexpensive rear internal-combustion engine that gives it a flight range of slightly more than 800 km at a speed of 130–140 km/h. It carries a payload of up to 20 kg and can cover its maximum distance within about six hours of flight.

The model achieved one of its most important goals: an extremely low price. The platform proved so successful that the U.S. Army decided to procure it as a weapon, not merely as a training system. The price played a decisive role in that decision, since it allows the launch of hundreds or even thousands of drones, creating an effect of deception, confusion, and shock — thereby breaking the asymmetry that often characterizes conflicts between sophisticated armies such as the Israel Defense Forces and the U.S. Army and their adversaries.

The Liberty Ship Model

In an interview with a U.S. Army publication, the Director of Experimentation at the Office of the Under Secretary of War for Research and Engineering, Colonel Nicholas Law, explained that the inspiration came from the Liberty Ship production model, which enabled the rapid manufacture of thousands of cargo ships during World War II. “LUCAS will fulfill a similar role in the new era of warfare,” he said. “There is a price point at which we want to produce large numbers of these systems very quickly.” It is not a single manufacturer: the system is designed to move to multiple manufacturers so it can be built in mass quantities.

The combat version differs slightly from the training version. It carries an 18-kg warhead and has a range of 650–800 km. The American version also includes AI-based navigation and mission-management capabilities, enabling operation as part of a swarm. The most important detail is the price: about $35,000 per unit, compared with roughly $20,000 for the Iranian version.

An Idea from Israel Aerospace Industries

Sharp-eyed observers have noticed the strong resemblance between the Iranian Shahed drone and the Israeli Harpy attack drone developed more than 30 years ago by Israel Aerospace Industries (IAI). Like its Iranian counterpart, the Harpy is a loitering munition with a broad delta wing and a small rear internal-combustion engine.

Both are launched using a booster rocket that separates after launch, both fly autonomously according to pre-programmed route data, and both can be launched from trucks and ships. However, the Harpy was more sophisticated and more expensive because its primary mission was detecting electromagnetic emissions in order to destroy radar systems.

IAI Harpy at Paris Air Show 2007. Source: Wikipedia
IAI Harpy at Paris Air Show 2007. Source: Wikipedia

There is no any official confirmation of the widespread assumption within the industry that the Shahed is a low-cost copy of the Harpy. Nevertheless, its history provides several possible points where the Iranians might have obtained the knowledge necessary for some form of reverse engineering. The first is the Chinese deal: in 1994 Israel sold Harpy drones to China in a transaction that caused a crisis with the United States and eventually led to the resignation of Israel’s Defense Ministry Director-General, Amos Yaron.

Under U.S. pressure, the Chinese drones were not upgraded in 2004 as originally planned. Later reports suggested that China conducted reverse engineering on them and developed its own version called ASN-301. The Harpy is used by several militaries, including those of India, Morocco, Turkey, and Azerbaijan (which borders Iran). Azerbaijan used the system in its battles with Armenia, and during those conflicts there were reports of drones that strayed from their course or were shot down.

Close military ties between China and Iran, along with the loss of drones in Azerbaijan, could help explain how design knowledge from the Israeli UAV — which was the first of its kind in the world — might have found its way into the Iranian drone. However, fully copying the Israeli concept does not necessarily require complete reverse engineering: its core characteristics — such as a broad delta wing, slow long-range flight, a small rear pusher engine, and container launch using a booster rocket — can be replicated even without reverse engineering.

Israeli Doctrine vs. Iranian Doctrine

There are also notable differences between the two platforms. The Israeli Harpy uses a Wankel engine produced by Elbit, which is very efficient and extremely quiet but expensive to manufacture. The Iranian Shahed is based on a small scooter like piston engine, which is cheap and noisy. It is so widely available. Similar engines can be purchased worldwide, even through online model-aircraft parts stores. In general, the Shahed relies on cheap, easily obtainable civilian components, while the Israeli drone is built from military grade expensive components.

The largest difference lies in the avionics systems: The Israeli Harpy searches for specific radiation sources and can navigate toward them; if it does not detect a radar signal, it can abort the mission. The Iranian Shahed, by contrast, is equipped with what might be described as “poor man’s avionics”. A set of coordinates is entered into the system, and the drone flies toward them using GPS. The bottom line is that the differences between the two are less about technology and more about combat doctrine. In this sense, the American LUCAS drone has adopted the Iranian operational concept almost in full.

Vay to Integrate Nexar’s AI Accident Prediction Model into Remotely Driven Vehicle Fleet

[Image: Remote driving via a steering-wheel control system. Credit: Vay]

Israeli computer vision company Nexar announced a partnership with mobility startup Vay to integrate its AI-based accident prediction model into Vay’s remotely driven vehicle fleet. The model, called BADAS (Beyond ADAS), is designed to identify road hazards seconds before they occur and provide real-time alerts about dangerous situations.

According to the companies, the integration will add a predictive safety layer to Vay’s remote-driving system — a service considered the world’s first commercial fleet of vehicles driven remotely on public roads.

As part of the collaboration, Nexar’s AI model will analyze video streams and sensor data from cameras installed on Vay’s vehicles in order to identify potential risk situations. The goal is to add predictive capabilities that anticipate danger before the operator reacts, generating alerts for scenarios such as a nearby vehicle drifting out of its lane, a pedestrian stepping into the road, or unusual traffic patterns at an intersection.

A car that arrives without a driver

Vay is a mobility startup founded in Berlin in 2018 that is developing a new model for urban transportation: a car that arrives at the user without a driver — but is not fully autonomous.

Instead, the vehicle is delivered by a human operator located in a remote control center who drives the car using a tele-driving system.

When a user orders a vehicle through the company’s app, a remote operator drives the car to the requested location. Once the vehicle arrives, control is handed over to the user, who drives the car to their destination. At the end of the trip, the remote operator reconnects to the vehicle and either parks it or drives it to the next customer.

The model combines elements of car sharing and ride-hailing but differs from both: there is no driver inside the vehicle, yet the system is not fully autonomous. The concept is designed to offer a flexible and potentially cheaper alternative to taxi services while avoiding the complexity and high costs associated with developing fully autonomous vehicles.

Vay’s system relies on a fleet of electric vehicles equipped with multiple cameras, real-time data connectivity, and safety mechanisms that enable low-latency remote control. The operator sits at a control station that mimics a traditional driving cockpit, complete with a steering wheel, pedals, and screens displaying the vehicle’s surroundings.

The company launched its commercial service in Las Vegas and has raised more than $200 million from investors including Kinnevik, Coatue, and Atomico. Its business model relies on a shared fleet of vehicles, with remote operators responsible for delivering cars to users and managing fleet logistics between trips.

Not a driving model — a crash prediction model

The technology Nexar is integrating into the service is based on its AI model BADAS, designed to predict crashes and road risks before they occur.

The model was trained on a massive dataset of real-world driving data collected over several years from a global network of dashcams installed in private vehicles, commercial fleets, and urban monitoring systems.

Unlike traditional ADAS systems that react to events once they occur, BADAS attempts to identify early behavioral patterns that indicate a potential risk. Nexar’s dataset includes thousands of “near-miss” events — situations in which a dangerous scenario emerged but no actual crash occurred — allowing the model to learn what abnormal behavior on the road looks like.

According to the company, the model was able to predict accidents on average 4.9 seconds before they happened during testing.

Nexar does not aim to build its own autonomous vehicles. Instead, it positions the model as infrastructure that can be integrated into systems developed by automakers, insurance companies, commercial fleets, and smart city platforms.

A complementary technology stack

The collaboration combines two complementary layers of technology: a remote-driving system that allows vehicles to operate without a driver inside the car, and an AI model that analyzes the driving environment and provides early warnings of potential hazards.

In Vay’s system, the remote operator relies on live video feeds from the vehicle’s cameras to understand the road environment. Nexar’s predictive model can add an additional layer of analysis on top of that video stream — effectively acting as an AI-powered assistant that identifies dangerous patterns and alerts the operator before they notice them.

Beyond improving safety, such a system could also increase the operational efficiency of the service. A predictive monitoring layer that continuously analyzes the environment may reduce cognitive load on remote operators and allow the company to scale its fleet operations while maintaining high safety standards.

For Nexar, the partnership demonstrates how its model can serve as a foundational software layer across multiple mobility platforms — not only in private vehicles or ADAS systems, but also in connected fleets and emerging transportation services.

More broadly, the move reflects a growing trend in the transportation industry: the shift from reactive safety systems that detect danger after it appears to predictive systems built on real-world driving data.

If models of this type prove reliable at scale, they could become a core software layer in next-generation transportation safety systems — whether in autonomous vehicles, connected fleets, or remote-driving services.

Elsight Sales Surge 15-Fold; Company Builds Global Defense Sales Network

[Image caption: Elsight’s HALO drone connectivity module]

Following dramatic sales growth in 2025, Ramat Gan–based Elsight has decided to establish a global sales and business development network focused primarily on the defense market for drone and unmanned systems technologies.

As part of this move, the company announced the appointment of five senior executives with operational and defense procurement experience, working with the U.S. Department of War, Israel’s defense establishment, and NATO allied governments.

CEO Yoav Amit said the expansion of the sales organization reflects the scale of new opportunities emerging in the defense market. “Unmanned systems are becoming a central component of modern military operations, which makes reliable connectivity a mission-critical capability.”

A Turnaround After Years of Slow Growth

Elsight develops communications systems for drones, but for many years its sales remained modest. The dramatic turnaround occurred over the past year.

In 2025, sales grew more than 15-fold compared with 2023 and more than 11-fold compared with 2024. Revenue reached approximately $23 million in 2025, compared with $2 million in 2024 and $1.5 million in 2023.

For the first time in its history, the company turned profitable.

The momentum was also reflected in the company’s share performance. Its stock on the Australian Securities Exchange (ASX) has surged roughly 1,500% over the past 12 months, giving the company a market capitalization of about A$1.1 billion (approximately $770 million).

The Insight That Drove the Shift

To understand the turnaround, one must look back at the company’s origins.

Elsight was founded in 2009 by former Israeli military intelligence officers Nir Gabay and Roee Kashi. In its early years, the company developed systems for real-time video and data transmission for homeland security and defense agencies.

Its technology allowed stable video streams to be transmitted from the field by combining multiple communication networks simultaneously, such as several cellular networks, helping overcome connectivity limitations in challenging environments.

In 2017 the company went public on the Australian Securities Exchange (ASX).

Toward the end of the last decade, Elsight identified a strategic opportunity: reliable connectivity for mobile systems was one of the key bottlenecks in operating drones.

Modern drones must transmit real-time video, flight telemetry, and control commands, often across long distances and in areas with unstable cellular coverage.

To address this challenge, Elsight developed HALO, a compact communications module mounted on drones that aggregates multiple communication links — cellular, radio, and sometimes satellite — into a single stable connection.

The module enables continuous transmission of video, telemetry data, and control commands, even if one of the communication networks becomes weak or unavailable.

Demand Surge in the Military Drone Market

Elsight’s move into the drone sector coincided with a surge in demand for military unmanned systems.

As a result, the company began seeing a shift from pilot projects and technology demonstrations to actual orders from drone manufacturers.

In December 2025, Elsight reported the largest contract in its history — a roughly $21 million order from a European defense drone manufacturer.

At the same time, the company signed additional agreements with European unmanned systems manufacturers and was selected to participate in Project G.I., a program run by the Pentagon’s Defense Innovation Unit that evaluates technologies for U.S. military unmanned systems.

Building a Global Sales Infrastructure

Against this backdrop, Elsight’s latest move becomes clearer: the company is positioning itself as a connectivity infrastructure provider for military and industrial drone programs and building a presence in major NATO markets and other strategic regions.

As part of this effort, the company appointed Ryan Garay, a former U.S. Army Green Beret specializing in tactical communications, to lead engagement with the U.S. government and special programs.

Roi Lupo, who served nearly two decades in the Israeli Air Force special forces before moving into defense business development in the United States, was appointed Director of Business Development for North America.

Ron Kislev, former CEO of UAV Tactical Systems and a former senior executive at Elbit Systems, will lead the company’s activities with the UK and NATO countries.

Tobias Willuhn, previously head of ISTAR and electronic warfare programs at Elbit Systems Germany, will lead operations in Germany, the European Union, and NATO markets.

Shay Dvir, formerly involved in business development at defense robotics companies XTEND and Roboteam, will lead the company’s activities in Israel and Southeast Asia.

If in its early years Elsight focused on communications technologies for niche defense applications, the company now aims to make HALO a standard connectivity component in drone platforms.

The recent appointments suggest Elsight expects demand for its technology to grow significantly in the coming years — and intends to position itself to capitalize on that wave.

Innoviz LiDAR Sensors to Be Integrated Into Dataspeed Autonomous Vehicle Platforms

[In the photo: Dataspeed’s autonomous vehicle platform. Credit: Dataspeed]

Innoviz announced the expansion of its partnership with U.S.-based company Dataspeed, under which its InnovizSMART LiDAR sensor will be integrated into Dataspeed’s drive-by-wire platforms used to develop and test autonomous vehicles.

According to the announcement, Innoviz’s sensor will become an integral component of Dataspeed’s vehicle platforms, which enable full computer control over core vehicle systems—including steering, braking, and throttle—and are used by companies and organizations developing autonomous technologies. As part of the collaboration, Dataspeed will offer InnovizSMART sensors as part of the systems it supplies to customers, primarily in North America.

The joint solution targets a range of autonomy applications, including vehicles used in defense, agriculture, mining, industry, and ground robotics—sectors where autonomous vehicles often operate in challenging environments such as farmland, open-pit mines, and desert terrain.

Dataspeed, headquartered in Michigan, specializes in developing vehicle platforms designed for the testing and development of autonomous driving systems. The company provides drive-by-wire systems that allow computers to control vehicles through electronic interfaces, effectively transforming production vehicles into testing platforms for automotive companies, startups, universities, and research labs.

The company’s platforms are used by a wide range of organizations in the autonomy industry to integrate sensors, AI computers, and algorithms for the development of autonomous driving systems. According to Dataspeed, its technology has been deployed in more than 500 vehicles worldwide. Vehicle models converted using its platform include the Ford Fusion, Lincoln MKZ, Chrysler Pacifica, Jeep Grand Cherokee, and Ford Ranger.

Beyond civilian uses, Dataspeed’s platforms are also employed in defense and ground robotics projects, including development programs for government agencies and initiatives linked to the U.S. Army focused on autonomous ground vehicles.

The InnovizSMART sensor that will be integrated into Dataspeed’s platforms is a 3D LiDAR system designed primarily for industrial and autonomous applications outside the passenger-car market. Unlike the company’s flagship sensors—InnovizOne and InnovizTwo—designed for integration into production vehicles by automakers, InnovizSMART was developed specifically for markets such as robotics, agriculture, mining, and defense, where durability and operational flexibility are often more critical than strict automotive certification.

The sensor provides high-resolution, long-range three-dimensional mapping of the environment and is engineered to operate reliably in harsh conditions—including scenarios in which the sensor window may be covered with mud, water, or dust. These capabilities make it particularly suitable for autonomous vehicles operating in challenging environments such as mining sites, agricultural fields, or off-road terrain. The platform also enables Innoviz to address a broader market of autonomy and Physical AI applications beyond the traditional automotive industry.

doubleAI Claims 3.6x Speedup Over NVIDIA’s GPU Code

By Yohai Schweiger

Israeli startup doubleAI, founded by CEO Prof. Amnon Shashua and CTO Prof. Shai Shalev-Shwartz, announced that its AI system, WarpSpeed, has successfully rewritten and re-optimized CUDA kernels in NVIDIA’s cuGraph library — part of the RAPIDS software ecosystem for GPU-accelerated data science — achieving an average 3.6× speed improvement over versions refined by NVIDIA’s CUDA engineers over the past decade.

According to the company, every tested kernel showed some degree of improvement, with more than half delivering over 2× speedups. The optimized code has been published on GitHub, allowing users to deploy the accelerated version without modifying existing application code. The announcement was accompanied by a public post from Shashua on X and a detailed technical blog.

cuGraph is a core component of NVIDIA’s RAPIDS suite and is widely regarded as one of the leading GPU libraries for graph analytics — a critical domain for network analysis, recommendation engines, cybersecurity, bioinformatics, and financial systems. Its kernels were developed over years by engineers specializing in hardware-level performance optimization, where decisions about memory layout, thread scheduling, warp structure, and cache behavior can dramatically affect results.

Unlike conventional application development, GPU performance engineering operates in a deeply contextual decision space with no single “correct” solution — only delicate trade-offs between physical and computational constraints.

Were LLM “Gold Medals” Misleading?

Beyond the engineering achievement itself, Shashua frames the milestone as part of a broader debate over the limits of modern AI — particularly whether large language models, scaled through massive training, can truly tackle deep, complex problems where data is limited, validation is difficult, and reasoning chains are long and context-dependent.

In his post, Shashua notes that AI systems have recently “won gold medals at the IMO” and “outperformed top programmers on CodeForces,” but argues that these victories rely on unusually favorable conditions. He describes what he calls “three hidden crutches: abundant training data, trivial verification, and short reasoning chains.”

“When all three are present,” he writes, “today’s AI excels. Remove even one — and it collapses.”

GPU performance engineering, he argues, is a stress test where none of those conditions hold. “Data is scarce. Correctness is hard to verify. And performance emerges from a long chain of interdependent decisions — memory layout, warp behavior, caching, scheduling, graph structure.” In such environments, there is no synthetic benchmark with a clear answer, but rather a vast and tightly coupled search space where each design choice influences many others.

Shashua further claims that even advanced coding agents struggle in this domain. “Even sophisticated agents like Claude Code, Codex, and Gemini CLI fail dramatically here,” he writes, “often producing incorrect implementations even when provided with cuGraph’s full test suite.” According to him, “scaling alone cannot break this barrier,” and new algorithmic ideas were required to address this level of complexity.

AEI Instead of AGI

Founded in late 2023, doubleAI has raised hundreds of millions of dollars at a valuation reportedly approaching $1 billion. The company focuses on building AI systems tailored to solving particularly complex engineering and scientific problems, where — it claims — expert-level or superhuman performance can be achieved through deep algorithmic search rather than brute-force scaling of language models.

doubleAI positions the current achievement as part of a broader vision it calls Artificial Expert Intelligence (AEI): systems that consistently outperform human experts in narrow but critical domains where expertise is scarce and expensive. Rather than pursuing generalized AGI, the company concentrates on solving deep optimization problems, combining learning from limited data, probabilistic validation methodologies, and agentic search structures that navigate complex decision spaces.

The approach resembles an advanced algorithmic search system more than a conventional one-shot language model — and, if the performance gains hold up under community scrutiny, may signal a shift in how AI tackles some of computing’s most demanding low-level challenges.