Israeli Researcher Uncovers Critical Software Infrastructure Flaws Using AI — on an $80 Budget

Next week, Simcha Kosman, a senior researcher at CyberArk Labs, will present a new study at Black Hat Europe in London, one of the world's most prestigious cybersecurity conferences, where only a small fraction of submissions are accepted. His research demonstrates how artificial intelligence can be used to detect security flaws in widely deployed software at a fraction of the traditional cost and time, rivaling the capabilities of industry giants such as Google and OpenAI.

Kosman and his team set out to answer a deceptively simple question: can AI uncover real vulnerabilities in massive software projects, such as the Linux kernel, Redis and FFmpeg, without huge budgets or large teams? Their findings point to an unequivocal yes. In just two days, and for less than $80 in total compute costs, their tool led to the discovery of dozens of vulnerabilities, nine of which have already been assigned official CVE identifiers across major projects including the Linux kernel, FFmpeg, Redis, RetroArch, Libretro, Bullet3 and Linenoise.

At the heart of the study is a new open-source tool called Vulnhalla. The system combines CodeQL—GitHub’s industry-standard static analysis engine—with an AI model designed to dramatically reduce noise. On large repositories, CodeQL alone can generate tens of thousands of alerts, the vast majority of them false positives. Vulnhalla tackles this bottleneck directly: it analyzes CodeQL’s findings, extracts relevant code context for each alert, and uses the AI model to determine which findings have genuine exploit potential.

Crucially, the researchers don’t simply ask the model broad questions like “Is this a vulnerability?” Instead, they guide it through a structured sequence of prompts that mirror the reasoning of an experienced security analyst: Where is the buffer defined? What is its size? Does it change? What is the target size? Is there a data flow that could lead to a memory-boundary violation? This step-by-step, logic-driven approach forces the model to perform genuine reasoning rather than relying on superficial pattern recognition. According to the study, this methodology reduces false positives by more than 90% for several vulnerability classes, and in some cases by up to 96%.
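The question chain above can be pictured as a small triage loop. The prompts, the `Alert` fields and the stub model in this sketch are illustrative assumptions, not Vulnhalla's actual prompts or API; in the real tool the `ask` callable would be an LLM that answers one structured question at a time from the extracted code context.

```python
from dataclasses import dataclass, field

# Analyst-style questions asked in order, mirroring the methodology
# described above. The exact wording Vulnhalla uses is an assumption.
QUESTIONS = [
    "Where is the buffer defined?",
    "What is its size?",
    "Does the size change?",
    "What is the size of the data written into it?",
    "Is there a data flow that can violate the memory boundary?",
]

@dataclass
class Alert:
    rule_id: str                     # CodeQL rule that fired
    code_context: str                # snippet extracted around the finding
    answers: dict = field(default_factory=dict)

def triage(alert: Alert, ask) -> bool:
    """Walk an alert through the question chain. `ask` is any callable
    (here a stub, in practice an LLM) answering one question at a time."""
    for q in QUESTIONS:
        alert.answers[q] = ask(q, alert.code_context, alert.answers)
    # Only the final, flow-level answer decides exploitability.
    return alert.answers[QUESTIONS[-1]].strip().lower().startswith("yes")

# Stub model for demonstration: flags the alert only when the context
# shows a fixed-size buffer receiving an unbounded copy.
def stub_model(question, context, prior_answers):
    if question == QUESTIONS[-1]:
        return "yes" if "strcpy" in context and "char buf[" in context else "no"
    return "(answered from code context)"

risky = Alert("cpp/unbounded-write", "char buf[16]; strcpy(buf, user_input);")
safe  = Alert("cpp/unbounded-write", "std::string s = user_input;")
print(triage(risky, stub_model), triage(safe, stub_model))  # True False
```

The point of the structure is that the model never answers "is this a vulnerability?" in one shot; each answer narrows the next question, and only the final data-flow answer gates the verdict.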

The result positions Vulnhalla as a compelling alternative to more advanced proprietary systems such as Google’s Big Sleep and OpenAI’s Aardvark. It delivers comparable vulnerability-detection performance while remaining fully open, transparent and community-driven. For development and security teams struggling under the weight of soaring alert volumes, this hybrid approach offers a way to focus resources on a far smaller set of findings with real-world impact.

As Kosman notes, the research marks another step toward using AI not just to detect weaknesses faster, but to help close widening security gaps in the software we all rely on every day.

Foretellix and Voxel51 Partner on Advanced 3D Reconstruction for Autonomous-Driving Training

Israeli company Foretellix and U.S.-based Voxel51 have announced a new partnership aimed at accelerating the training and verification of autonomous-vehicle (AV) systems. Together, they are introducing an end-to-end workflow that transforms raw driving logs into editable, AI-generated 3D scenes that can be reconstructed, manipulated and deployed at scale in simulation environments. The collaboration leverages advanced neural-reconstruction techniques and visual-data processing to offer AV developers a powerful new tool for improving training, testing and validation workflows.

At the heart of the joint effort are drive logs—rich datasets collected from autonomous and semi-autonomous vehicles during real-world operation. These logs typically include video, lidar, radar, GPS, IMU measurements, vehicle-system status and annotated objects. For years, such logs have been indispensable for perception models, yet inherently limited: they capture only what actually occurred on the road. Foretellix and Voxel51 aim to convert these raw logs into full 3D reconstructions that can be expanded far beyond reality, enabling the creation of synthetic variations, edge cases and stress scenarios that cannot be consistently captured in physical testing.

In the joint workflow, Foretellix first classifies and analyzes the driving data, identifying coverage gaps relative to the vehicle's operational design domain (ODD). Voxel51 then processes the raw inputs, cleaning noise, running consistency checks, aligning data across sensors and adding contextual interpretation, to prepare the material for AI reconstruction. Their combined pipeline draws on 3D Gaussian Splatting (3DGS) and advanced rendering technologies to create realistic, editable 3D scenes. Foretellix then re-enters the loop, generating controlled variations of each scenario, modifying environmental elements, injecting external events and producing synthetic sensor data that mimics real-world output. The final stage is carried out in Voxel51's visualization and analytics platform, ensuring that the mixed real-and-synthetic dataset meets rigorous quality standards for model training.
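The four-stage workflow can be pictured as a chain of functions over a drive log. Every name and data structure below is a hypothetical sketch invented for illustration, not a real Foretellix or Voxel51 API; the heavy stages (cleaning, neural reconstruction) are stubbed out.

```python
def find_coverage_gaps(log, odd):
    """Stage 1 (Foretellix): which ODD conditions does this log not cover?"""
    return [c for c in odd if c not in log["conditions"]]

def prepare_for_reconstruction(log):
    """Stage 2 (Voxel51): clean, check and align raw sensor data (stubbed)."""
    return {**log, "aligned": True}

def reconstruct_3d(log):
    """Stage 3: neural reconstruction, e.g. 3D Gaussian Splatting (stubbed)."""
    return {"scene": log["id"], "editable": True}

def generate_variations(scene, gaps):
    """Stage 4 (Foretellix): one synthetic variant per uncovered condition."""
    return [{**scene, "injected": gap} for gap in gaps]

log = {"id": "drive-042", "conditions": ["day", "dry"]}
odd = ["day", "night", "dry", "rain"]

gaps = find_coverage_gaps(log, odd)
scene = reconstruct_3d(prepare_for_reconstruction(log))
variants = generate_variations(scene, gaps)
print(len(variants))  # 2 variants: a "night" scene and a "rain" scene
```

The key design idea is that coverage analysis drives generation: variants are synthesized specifically for the conditions the recorded drive missed, rather than at random.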

Voxel51, a major player in computer vision, is headquartered in Michigan and specializes in large-scale visual-data management. Its flagship product, FiftyOne, allows teams to deeply inspect sensor datasets, uncover labeling errors, assess data quality and detect hidden patterns. Combined with Foretellix’s expertise in AV scenario simulation, the partnership creates a seamless technological chain—from recorded reality to layered, simulation-ready virtual environments.

The implications are significant. Instead of relying solely on costly, months-long field testing that often misses rare events, AV developers can now synthesize edge cases on demand, recreate unusual incidents, tweak environmental parameters—lighting, weather, traffic—and slow down or accelerate events for analysis. For teams working on perception, planning and prediction algorithms, this represents a paradigm shift: hundreds of scenario variations can be generated from a single logged moment, exposing model weaknesses and enabling rapid iteration without sending another vehicle onto the road.

Beyond product development, the collaboration may influence how the AV industry approaches regulatory validation. Authorities increasingly require evidence of ODD coverage, robustness against edge cases and consistent behavior in complex scenarios. If real-world scenes can be faithfully reconstructed and expanded with controlled synthetic variations, validation could become more systematic, transparent and comprehensive.

Ultimately, the Foretellix–Voxel51 partnership reflects a sweeping industry trend: the shift from relying solely on raw real-world data to a blended model in which rich virtual environments complement physical driving. Instead of learning only from what has happened, AV systems can now be tested against what could happen. For autonomous-driving developers, this promises higher safety, improved robustness and shorter development cycles, bringing the industry closer to vehicles that can reliably navigate the full complexity of the real world.

VisIC Raises $26 Million and Expands Into Data Centers

[Image: VisIC’s CEO alongside one of the company’s high-power GaN switches]

VisIC Technologies, based in Ness Ziona, announced the completion of a $26 million Series B funding round. The round included leading industry investors, among them Hyundai Motor Company and Kia, which joined as strategic backers. Their participation reflects the growing interest among automakers in GaN-based power technologies—an energy-efficient alternative for converters and power systems in electric vehicles. For the carmakers, the investment is a step toward evaluating advanced components for future models, as part of broader efforts to improve performance and reduce energy losses.

The new capital will support the development of VisIC’s next generations of GaN power chips, including devices with higher voltage ratings, as well as scaling manufacturing capacity to meet rising market demand.

While VisIC has long been active in the EV sector, the company reports a surge in interest from additional markets—chief among them data centers. The rapid growth of AI workloads and cloud infrastructure has dramatically increased power consumption, driving demand for more efficient power systems. According to the company, high-power-density GaN devices are well-suited for converters and power supplies used in data-center environments, positioning its technology as a strong contender across a broader set of industrial applications.

VisIC’s D³GaN platform is based on transistors with low resistance and fast switching speeds, enabling compact, high-efficiency converters that outperform traditional silicon-based solutions.

In recent months, the company introduced the second generation of its GaN devices, achieving a significant reduction in RDS(on)—the electrical resistance of the transistor when conducting. Lower resistance translates into higher efficiency, reduced heat, and more compact, energy-efficient system designs.
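To see why a lower RDS(on) matters, note that a transistor's conduction loss scales with the square of the current through its on-resistance. A minimal sketch of that arithmetic follows; the current and resistance figures are illustrative round numbers, not VisIC device specifications.

```python
# Conduction loss in a power transistor while it is on:
#   P_loss = I_rms^2 * RDS(on)
# so halving RDS(on) at the same current halves the dissipated heat.
def conduction_loss_w(i_rms_a: float, rds_on_mohm: float) -> float:
    """Conduction loss in watts for an RMS current and RDS(on) in milliohms."""
    return i_rms_a ** 2 * (rds_on_mohm / 1000.0)

gen1 = conduction_loss_w(30, 50)   # 30 A through a 50 mOhm device
gen2 = conduction_loss_w(30, 25)   # same current, half the on-resistance
print(gen1, gen2)  # 45.0 W vs 22.5 W of heat to remove
```

Less heat means smaller heatsinks and denser converter designs, which is exactly the system-level benefit the company cites.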

At the same time, VisIC has strengthened its supply chain by expanding production to additional semiconductor fabs, reducing reliance on a single supplier and improving manufacturing resilience. The company has also launched a new global distribution channel through DigiKey, enabling broader and more direct access to its GaN components for customers worldwide.

AI-Powered Malaria Control: Diptera.ai Reaches Field-Validation Milestone

[Photo: Dr. Eli Ordan and Dr. Ariel Livne, co-founders of Diptera.ai, alongside Dr. Filipos Papathanos of the Hebrew University. Courtesy of Diptera.ai]

Diptera.ai’s flagship project has reached a major breakthrough. After three years of intensive development—and in partnership with the BIRD Foundation, which supports industrial collaboration between Israel and the United States—the Israeli company has completed a full end-to-end system based on the Sterile Insect Technique (SIT), a biological method for mosquito control that relies on releasing sterilized males.

The project delivered a complete operational chain for Anopheles mosquitoes, including rearing, sorting, sterilization, marking, and AI-driven monitoring. The system was subsequently validated on live mosquito populations in Kenya and a second African country. With each technological milestone now achieved, Diptera.ai is preparing for the next step: widescale field trials across Sub-Saharan Africa.

The company was founded in response to the urgent global need to curb the spread of malaria, a disease that kills hundreds of thousands each year. Traditional control methods are losing effectiveness, while SIT offers a clean, targeted alternative that avoids chemical pesticides. Historically, large-scale SIT deployment has been constrained by the inability to sort larvae at high throughput, the lack of reliable training data for AI models, and insufficient real-time monitoring tools.

Diptera.ai has now removed these bottlenecks. The company developed optical-imaging systems capable of detecting sex organs in larvae, a water-flow mechanism for processing large volumes, and a fully automated AI pipeline that generates high-quality labeled datasets without manual intervention. In partnership with U.S.-based Vectech, the team also created the Scout trap—equipped with UV illumination and machine-vision algorithms—to identify marked mosquitoes in the field and map population dynamics in real time.

During the project, the work expanded to additional major Anopheles species, including gambiae and coluzzii, significantly increasing market impact. All components—rearing, sorting, sterilization, and field hardware—were re-engineered, tested, and adapted to local conditions in semi-natural African facilities. Results demonstrate full technological readiness: near-perfect larval sex-sorting accuracy, 97% precision in detection models, and reliable field monitoring through the Scout platform.

Diptera.ai is now targeting initial deployments in multiple African countries, in collaboration with governments, researchers, and global health organizations. Beyond the technological achievement, the company views this milestone as a foundation for a systemic shift in how malaria is fought—moving mosquito control toward a fully automated, data-driven, AI-enabled discipline.

Dr. Eli Ordan, Diptera.ai co-founder, said:
“The support of the BIRD Foundation is instrumental in accelerating our joint effort with Vectech to combat the growing threat of malaria in Sub-Saharan Africa. This cross-border collaboration allows us to merge advanced AI-driven monitoring with cutting-edge vector-control technologies, creating a scalable solution for malaria prevention. BIRD’s involvement is not only a vote of confidence in the urgency of the mission, but also a catalyst that helps transform scientific innovation into real-world impact.”

Omer Carmel, Director of Business Development at the BIRD Foundation, added:
“BIRD’s investment and guidance in this project highlight our commitment to breakthrough technologies capable of addressing critical global challenges such as malaria prevention. Working with Diptera.ai and Vectech enabled us to support AI-based solutions that deliver tangible results in the field, demonstrating BIRD’s role as a bridge for groundbreaking collaboration between Israeli and American companies.”

RETIA and TSG Cooperate on Multisensor Aerial Solutions

RETIA, a Czech defense company within the CSG Group, is setting up a new demonstration and integration center in Eastern Europe built around technologies from Israel’s TSG. As part of the initiative, TSG will supply licenses for core components of its systems—valued at approximately NIS 4.4 million—including advanced solutions for creating multi-sensor aerial situational awareness at low altitudes, as well as tactical command-and-control platforms designed to counter drones and unmanned aerial threats.

The new center will serve as a testing, demonstration, and integration hub for technologies that detect, classify, track, and intercept aerial targets in the low-altitude layer. It is intended to support the Czech Republic’s broader effort to upgrade its air-defense capabilities in line with NATO requirements and the renewed militarization trend across Eastern Europe. The site will integrate RETIA’s radar, sensors, and interception systems with TSG’s software and C2 solutions.

This order marks the first project under the strategic cooperation agreement signed earlier this year between the companies. Under the partnership, RETIA and TSG are also jointly competing in tenders to supply air-defense command-and-control systems to the air forces of both the Czech Republic and Slovakia.

TSG President Pini Yungman said the initiative represents a major milestone in the company’s entry into the European market amid rising demand for defenses against rocket and drone threats. RETIA CEO Jan Mikulcký added that integrating TSG’s tactical C2 capabilities strengthens CSG’s position in the air-defense domain and contributes directly to the Czech Republic’s national security.

NVIDIA Unveils an Open and Transparent Autonomous Driving Model

At this week’s NeurIPS conference, NVIDIA launched DRIVE Alpamayo-R1, a new autonomous-driving model described as the first industry-scale VLA (Vision-Language-Action) system to be released in open source. VLA refers to a model architecture that integrates visual perception, scene understanding, causal reasoning, and action planning into a single continuous framework.

The announcement marks a significant shift for the company. While NVIDIA has spent recent years building its AV efforts around dedicated hardware platforms such as DRIVE Orin and DRIVE Thor, it had never before opened a core driving module to the broader research community. For the autonomous-driving world — where closed, proprietary decision-making systems dominate — this is a notable milestone.

A Unified Model With Causal Reasoning at Its Core
Alpamayo-R1 (AR1) is an end-to-end autonomous-driving model that simultaneously performs computer vision, scene comprehension, causal reasoning, and trajectory planning. Unlike traditional AV architectures that separate perception, prediction, and planning, AR1 uses a unified VLA structure that stitches the layers together into a single, continuous decision pipeline.

At the heart of the model lies causal reasoning — the ability to break down complex driving scenarios, evaluate multiple “thought paths,” and select a final trajectory based on interpretable internal logic.
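As a rough illustration of this reason-then-act pattern, the toy sketch below scores candidate trajectories against detected hazards while recording a human-readable trace of why each was rejected or preferred. The function and field names are invented for illustration and do not reflect Alpamayo-R1's actual interface; in the real model the "thought paths" are produced by the network itself, not by hand-written rules.

```python
# Toy "reasoning then acting" planner: evaluate candidate trajectories,
# keep an interpretable trace, and return the cheapest safe option.
def plan(candidates, hazards):
    trace, best = [], None
    for traj in candidates:
        blocked = [h for h in hazards if h in traj["crosses"]]
        if blocked:
            trace.append(f"reject {traj['name']}: crosses {blocked}")
            continue
        if best is None or traj["cost"] < best["cost"]:
            trace.append(f"prefer {traj['name']}: cost {traj['cost']}")
            best = traj
    return best, trace

candidates = [
    {"name": "keep-lane",  "cost": 1.0, "crosses": ["cyclist"]},
    {"name": "nudge-left", "cost": 1.4, "crosses": []},
]
best, trace = plan(candidates, hazards=["cyclist"])
print(best["name"])        # nudge-left
for line in trace:
    print(line)            # one line per accepted/rejected thought path
```

The value of such a trace is exactly what the article highlights: every final trajectory comes with an inspectable chain of reasons, rather than an opaque score.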

According to NVIDIA, AR1 was trained on a blend of real-world data, simulation, and open datasets, including a newly introduced Chain-of-Causation dataset in which every action is annotated with a structured explanation for why it was taken. In the post-training phase, researchers used reinforcement learning, yielding a measurable improvement in reasoning quality compared with the pretrained model.

The model will be released for non-commercial use on GitHub and Hugging Face. NVIDIA will also publish companion tools, including AlpaSim, a testing framework, and an accompanying open dataset for AV research.

Open vs. Closed Models

Today’s autonomous-driving systems largely fall into two categories. Tesla uses an end-to-end Vision → Control approach, in which a single model processes camera input and outputs steering and braking commands. Tesla’s model is not open, does not provide explicit reasoning, and is not structured around a clear division between reasoning and action.

Mobileye, by contrast, maintains a more “classic” perception-prediction-planning stack built on semantic maps, deterministic algorithms, and safety rules. But Mobileye’s models are also closed systems that offer no external visibility into their decision-making logic.

This is where AR1 stands apart: it provides explicit, interpretable reasoning traces explaining why a particular trajectory was chosen — something rarely seen in AV systems, and never before at industrial scale.

The significance of making such a model open extends far beyond academia. Commercial AV stacks are black boxes, which makes regulatory evaluation, cross-model comparison, and stress-testing in rare scenarios difficult. By opening a reasoning-based driving model, NVIDIA enables transparent, reproducible experimentation — much like what Llama and Mistral have done for language models.

A Shift Toward a New Paradigm
AR1 signals a broader shift: autonomous driving is evolving toward a domain where general-purpose intelligence models play a central role, replacing rigid, hand-engineered pipelines. While there is no evidence yet that a unified VLA model can replace the entire AV stack, this is the clearest move to date toward what could be called a “physics of behavior” — an effort to understand not only what the car sees, but why it should act in a certain way.

The announcement also aligns with NVIDIA’s hardware strategy. As models become larger, more compute-intensive, and increasingly reliant on high-fidelity simulation, the case for using NVIDIA’s platforms only strengthens.

Alpamayo-R1 is not a full autonomous-driving system, but it is the first time that the cognitive heart of such a system — its decision-making logic — is being opened to researchers, OEMs, and startups. In a field long defined by closed-door development, that alone is a meaningful breakthrough.

Advantech Expands Edge AI Strategy, and Market Engagement in Israel

Sponsored Content by Advantech

Advantech is expanding its Edge Computing + Edge AI strategy and strengthening its global partner ecosystem to accelerate the deployment of intelligent applications across industries, as AI, IoT, and edge computing enter a golden decade of integrated adoption. According to IDC and Gartner, the global Edge AI market is expected to surpass USD 500 billion by 2034. 

Edge Computing and AI: From Industrial Control to Intelligence 

With nearly 50 years of industry experience, Advantech has evolved from a Taiwan-based IPC manufacturer into a global industrial automation and AIoT leader. In recent years, the company has identified Edge Computing and Edge AI as its key engines for future growth. To accelerate digital transformation across sectors, Advantech has integrated AI and edge computing into its industrial platforms and continues to develop scalable, cross-industry intelligent solutions.

In May 2025, Advantech returned to COMPUTEX 2025 under the theme “Edge Computing & WISE-Edge in Action,” showcasing its latest edge intelligence applications across manufacturing, energy, transportation, and healthcare. The event gathered nearly 2,000 partners and industry leaders worldwide, underscoring Advantech’s commitment to industrial innovation.

Vincent Chang, Managing Director of Asia and Intercontinental Region, emphasized that the company is shifting from a product-driven model to an industry-driven strategy—focusing on five key verticals: Edge intelligent systems, Smart manufacturing, Energy and utilities, Smart healthcare and Smart retail & services.

Through the WISE-Edge platform, Advantech connects hardware, software, and industry expertise to enable real-time insights and accelerate innovative edge applications, playing a critical role in the global shift toward intelligent transformation.

From Cloud to Edge: Advantech’s Ecosystem Strategy

As AI models grow larger and real-time processing becomes critical, enterprises worldwide are moving from cloud-only architectures toward edge computing for efficiency and privacy. Advantech is leading this transition with:

Comprehensive Edge AI Portfolio: Industrial-grade edge devices supporting AI inference, machine vision, and real-time analytics.

WISE-Edge Platform: Integrated toolchains from NVIDIA, Qualcomm, Intel, NXP, and other major chipmakers for rapid deployment in manufacturing, healthcare, retail, energy, and robotics.

Global Tech Partnerships: Collaborations with NVIDIA, Microsoft, and AWS to enhance cloud–edge orchestration, AI SDK integration, and end-to-end management frameworks.

Together, these capabilities form a scalable Edge AI ecosystem that accelerates adoption across industrial automation, medical imaging, smart retail, energy management, robotics, and smart city applications.

Expanding Edge AI Partner Recruitment in Israel

Advantech is deepening its presence in Israel, one of the world's most advanced innovation hubs. The company has built a solid foundation in Israel over many years, collaborating with key partners such as Eastronics, Dor Engineering, and ICPC. Through channel development, SI collaboration, and global resource integration, Advantech supports multi-level market engagement and provides ODM/OEM customization and localized engineering services that help customers innovate from design to deployment.

Israel—recognized globally for its leadership in AI, cybersecurity, medical technology, and deep-tech innovation—is experiencing rapid acceleration in AI-driven automation. This is driving strong demand for edge AI platforms, machine vision systems, device connectivity, and industrial-grade computing. Advantech aims to deepen collaboration with Israeli partners to expand Edge AI deployment across key sectors.

To empower partners and maximize market opportunities, Advantech offers:

Technical training and product onboarding

Joint marketing programs (JMF)

Proof-of-concept (PoC) collaboration

Global customer matchmaking

Advantech welcomes Israeli distributors, system integrators (SIs), and AI solution providers to join its global ecosystem and drive the next wave of intelligent transformation.

For more information:

Advantech Channel Partner Network in Israel