Lightricks Goes Open Source with LTX-2, Taking on Big Tech in AI Video

Photo above: Lightricks CEO and co-founder Dr. Zeev Farbman. Credit: Riki Rahman.

Lightricks announced at CES the full open-source release of its generative video-and-audio model, LTX-2, including model weights and training code. The move is unusual in a market where advanced video models are largely controlled by closed cloud platforms. Announced in partnership with NVIDIA, the launch positions Lightricks as an open alternative to approaches led by companies such as OpenAI and Google, and signals a potential shift in how generative video technology is deployed and adopted.

LTX-2 can generate synchronized video and audio at up to 4K resolution, with clip lengths of up to 20 seconds and high frame rates. The model is optimized to run locally on RTX-powered workstations as well as on enterprise DGX systems, and is positioned as production-ready rather than a research demo. Unlike closed platforms such as Sora or Veo, Lightricks allows developers and organizations not only to use the model, but also to retrain, customize and integrate it directly into products and internal workflows.
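For a sense of what local use of an open video model looks like in practice, here is a minimal sketch built on the Hugging Face diffusers library. The repository ID, automatic pipeline resolution and generation parameters are assumptions for illustration only; the article does not specify how LTX-2 is packaged or distributed.

```python
# Minimal local text-to-video sketch with Hugging Face diffusers.
# The repo ID and generation parameters are illustrative placeholders,
# not confirmed details of the LTX-2 release.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2",           # hypothetical repo ID; use whatever Lightricks publishes
    torch_dtype=torch.bfloat16,   # reduced precision to fit a single RTX workstation GPU
)
pipe.to("cuda")

result = pipe(
    prompt="A drone shot over a coastline at sunset, gentle waves",
    num_frames=121,               # clip length in frames; model-dependent
    num_inference_steps=40,
)
export_to_video(result.frames[0], "coastline.mp4", fps=24)
```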

Full open-source availability

While open video models already exist, most suffer from significant limitations, including lack of audio, lower visual quality or poor suitability for commercial use. LTX-2 is the first to combine full open-source availability with capabilities designed for real-world production, positioning it as a bridge between open research and the operational needs of the media and creative industries.

Lightricks is an Israeli company best known for its popular creative and editing apps, including photo and video tools used by millions of people worldwide. In recent years, the company has been expanding beyond consumer applications into the development of AI models and creative infrastructure aimed at professional creators and enterprise customers.

Behind the decision to open-source the model lies a clear business strategy. Lightricks is giving up exclusive control over the core technology in order to establish it as a standard platform others can build on. Rather than monetizing usage of the model itself, the company is positioning LTX-2 as the foundation for commercial tools, platforms and paid services developed on top of it. The approach mirrors familiar open-source business models in which economic value is created around the code rather than within it.

NVIDIA is not involved in developing the model itself, but plays a central role in positioning LTX-2 as a natural workload for RTX hardware and DGX systems. The partnership reflects a broader vision in which advanced generative video can and should run outside the cloud, on local workstations and within enterprise environments.

The release of LTX-2 reflects a broader shift in the generative video market, from closed models optimized for demonstrations and limited cloud-based access, toward open infrastructure designed for deep adoption and large-scale product development. Rather than focusing on producing the most eye-catching demo, Lightricks is aiming to provide the foundation on which the next generation of video creation tools will be built.

Rubin Pushes the GPU Off Its Pedestal

Above: The full Rubin platform. Source: Nvidia

At CES 2026 in Las Vegas, Nvidia unveiled Rubin, a platform it describes as “the next generation of AI infrastructure.” Rubin’s existence, and the fact that it is scheduled to reach the market in the second half of 2026, were already known. What this announcement revealed for the first time, however, was the idea behind it: not another generation of GPUs, but a deep conceptual shift in how large-scale AI systems are built and operated. Instead of a massive GPU at the center supported by surrounding components, Nvidia presented a complete architecture that functions as a single system, tightly integrating compute, memory, networking and security.

The recurring message is that Rubin is not a chip but a full rack-scale computing system, designed for a world in which AI is no longer a one-off chatbot but a constellation of agents operating over time, maintaining context, sharing memory and reasoning within a changing environment. In that sense, Rubin marks Nvidia’s transition from selling raw compute power to selling what is effectively a cognitive infrastructure.

Codesign as a principle, not a slogan

Nvidia has used the term “full stack” for years, but in practice it usually meant a collection of components built around the GPU. With Rubin, the concept of codesign takes on a very different meaning. This is not about tighter integration of existing parts, but about designing every element of the system—CPU, GPU, networking, interconnect, storage and security—together from the outset, as a single unit built to serve entirely new types of workloads.

The practical implication of this approach is that the GPU is no longer the architectural starting point. It remains a powerful and central component, but it is no longer the system’s unquestioned master. Rubin is designed around the assumption that the next AI bottleneck is not raw compute, but context management, persistent memory and orchestration across processes and agents. These are not problems solved by a faster GPU alone, but by redistributing responsibilities across the system.

In Rubin, architectural decisions are driven not by what the GPU needs, but by what the system as a whole must accomplish. This is a turning point for Nvidia, as it effectively moves away from the GPU-first mindset that has defined the company since the early CUDA era, replacing it with a system-level view in which compute is only one layer of a broader architecture.

The role of the CPU, and what it means for the x86 world

One of the most intriguing components in Rubin is the new Vera CPU. Unlike traditional data center CPUs, whose main role has been to host and schedule GPU workloads, Vera is designed from the ground up as an integral part of the inference and reasoning pipeline. It is not a passive host, but an active processor responsible for coordinating agents, managing multi-stage workflows and executing logic that is poorly suited to GPUs.

In doing so, Nvidia signals a profound shift in how it views the CPU in the AI era. Where the CPU was once largely a bottleneck on the path to the GPU, it now reemerges as a meaningful compute element—one that operates in symbiosis with the GPU rather than beneath it. The choice of an Arm-based architecture, and the fact that the CPU was designed alongside the GPU and networking rather than as a standalone component, point to Nvidia’s ambition to control the orchestration and control layer, not just the compute layer.

More broadly, the decision to use Arm reflects the need for flexibility and deep control over CPU design. Unlike general-purpose processors built to handle a wide variety of workloads, Arm allows Nvidia to tailor a processor precisely to the needs of modern AI systems, stripping away logic that is irrelevant to inference and agent orchestration. The implication is that the classic data center model—built around general-purpose x86 CPUs as the default foundation—is no longer a given for systems designed as AI-first from the ground up.

Memory, storage and the birth of a context layer

Perhaps the most significant architectural shift in Rubin lies in how inference context memory is handled. Nvidia introduced a new approach to managing the context memory of large models, particularly the KV cache generated during multi-step inference. In classical architectures, designed for short and isolated workloads, this memory had to reside in GPU HBM to maintain performance, making it expensive, scarce and ill-suited for long-running, multi-agent systems.

Rubin breaks this assumption by moving a substantial portion of context memory out of the GPU and into a dedicated layer that behaves like memory rather than traditional storage. This is also where the role of BlueField-4—the DPU derived from Mellanox networking technology—changes fundamentally. It no longer serves merely as an infrastructure offload engine, but becomes an active participant in managing context memory and coordinating access to it as part of the inference pipeline itself.

This shift reflects the gap between architectures built for training or one-off inference, and the needs of agent-based systems that operate continuously, preserve state and share context across components. In Rubin, memory and context management become integral to the inference performance path, not an external I/O layer—an adjustment that aligns closely with how modern AI systems are expected to function.
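To make the idea concrete, the sketch below shows the general technique of offloading per-layer KV-cache blocks from GPU HBM into pinned host memory and prefetching them back just in time. It illustrates the principle in plain PyTorch; it is not Nvidia's Rubin implementation, whose internals (BlueField-4, the dedicated context tier) are not detailed in the announcement.

```python
# Sketch of KV-cache offloading: keep only the layer currently needed in GPU
# HBM and park the rest in pinned host memory. Requires a CUDA device.
import torch

NUM_LAYERS, HEADS, SEQ, HEAD_DIM = 32, 8, 4096, 128

# Full cache lives in pinned CPU memory so transfers can be asynchronous.
kv_cpu = [
    torch.zeros(2, HEADS, SEQ, HEAD_DIM, dtype=torch.float16).pin_memory()
    for _ in range(NUM_LAYERS)
]

copy_stream = torch.cuda.Stream()

def prefetch(layer: int) -> torch.Tensor:
    """Copy one layer's KV block to the GPU on a side stream."""
    with torch.cuda.stream(copy_stream):
        return kv_cpu[layer].to("cuda", non_blocking=True)

def evict(layer: int, kv_gpu: torch.Tensor) -> None:
    """Write the updated KV block back to host memory; the GPU copy can then be freed."""
    kv_cpu[layer].copy_(kv_gpu, non_blocking=True)

# Toy decode step: only one layer's cache is resident in HBM at a time.
for layer in range(NUM_LAYERS):
    kv_gpu = prefetch(layer)
    torch.cuda.current_stream().wait_stream(copy_stream)
    # ... attention for this layer would read and update kv_gpu here ...
    evict(layer, kv_gpu)
```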

Connectivity also takes on a new role in Rubin. NVLink continues to serve as the high-speed internal interconnect between GPUs, but the Ethernet layer—embodied by Spectrum-6 and Spectrum-X—assumes a very different function than in traditional data centers. Instead of merely moving data between servers, the network becomes part of how the system manages compute and memory.

In this architecture, connectivity allows GPUs, CPUs and DPUs to access shared context memory, exchange state and operate as if they were part of a single continuous system, even when distributed across multiple servers or racks. Technologies such as RDMA enable direct memory access over the network without CPU involvement, turning the network into an active participant in the inference flow rather than a passive transport layer.

As a result, data movement, context management and inter-component coordination no longer happen “around” computation—they become part of computation itself. This is a prerequisite for distributed AI systems and long-running agents, where memory and state are as critical as raw compute.
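A toy example of the network acting as part of the memory path: one process owns a shared context block and another fetches it over the interconnect. The sketch uses torch.distributed with the gloo backend so it runs on a single machine; a production system would move this traffic over RDMA-capable fabrics, which the example does not model.

```python
# Rank 0 holds a shared context block; rank 1 fetches it over the network.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int) -> None:
    dist.init_process_group(
        "gloo", init_method="tcp://127.0.0.1:29500",
        rank=rank, world_size=world_size,
    )
    block = torch.zeros(4, 1024)      # a small "context block"
    if rank == 0:
        block.fill_(3.14)             # rank 0 owns the shared context
        dist.send(block, dst=1)       # serve it to the requesting rank
    else:
        dist.recv(block, src=0)       # fetch context across the interconnect
        print("fetched block mean:", block.mean().item())
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```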

This brings us back to the central theme of Nvidia’s announcement: the shift from training as the center of gravity to continuous, multi-agent inference. Rubin is designed primarily for a world in which most AI costs and business value reside in deployment, not training. In such a world, what matters is not only how fast you can compute, but how effectively you can remember, share and respond.

Rubin is, ultimately, Nvidia’s attempt to redefine the rules of AI infrastructure. No longer a race for TFLOPS alone, but a competition over who controls the entire architecture. If the strategy succeeds, Nvidia will not merely be an accelerator vendor, but a provider of full cognitive infrastructure.

10 Israeli Companies to Watch at CES 2026

By: Yohai Schweiger

CES 2026 opens this week in Las Vegas, and once again the Israeli presence is defined less by the number of companies and more by the direction they represent. From LiDAR moving behind the vehicle windshield, through neuromorphic chips that push AI directly into sensors, to AI agents that require enterprise-grade governance layers, alongside robots, sensing systems and digital health solutions already operating in the field, a clear pattern emerges. For Israeli companies, CES is becoming less of a stage for flashy demos and more a showcase for technologies designed to integrate into real-world systems, with demands for reliability, safety and scalability. Artificial intelligence is no longer the headline act, but a component embedded inside products and operational processes. The following ten Israeli companies offer a clear snapshot of this shift, from technological promise to deployable solutions.

Innoviz: Change the location, change the game – LiDAR moves behind the windshield

Innoviz arrives at CES with the first public unveiling of InnovizThree, a LiDAR sensor that aims to change not only performance metrics but the very role of LiDAR within the vehicle sensing stack. The key innovation is not incremental gains in range or resolution, but the ability to install the sensor behind the front windshield, inside the passenger cabin. For automakers, this location is considered strategically critical, offering better protection, simpler installation and maintenance, and cleaner integration into vehicle architecture. The move goes to the heart of the long-running debate around LiDAR, positioning it not as a controversial external add-on, but as an integral sensor alongside cameras and radar. Beyond automotive use, Innoviz signals that InnovizThree is also targeting additional markets such as humanoid robots, drones and physical AI systems, where compact, reliable and energy-efficient 3D sensing is required. Unveiling the sensor at CES is no coincidence, as the show has become a central platform for automotive technology innovation and a key meeting point between technology suppliers and automakers seeking mature solutions for next-generation sensing and autonomous driving systems.

Carteav: Modest autonomy that works today

Carteav approaches autonomous driving from a very different angle than what is often presented at technology exhibitions. While CES frequently serves as a stage for prototypes and long-term visions, Carteav arrives with a clearly defined and deployable solution for low-speed autonomous mobility in controlled environments. At CES, the company will showcase its transportation platform built around small electric autonomous vehicles and smart fleet management, designed for campuses, resorts, parks and gated communities. Rather than tackling the full complexity of autonomous driving on public roads, Carteav focuses on environments where autonomy can be deployed today, with lower regulatory and safety barriers. This pragmatic approach places the company within a broader trend of applied autonomy, favoring functional, near-term solutions over impressive but distant demonstrations, and highlighting how AI and fleet management can already deliver tangible operational value.

POLYN Technology: Neuromorphic intelligence directly on the sensor

POLYN is making its first appearance at CES with its edge AI technology based on analog neuromorphic chips. Founded in 2019, the company is developing an approach that enables intelligent processing directly on the sensor, before data is sent to a central processor or the cloud. The technology targets applications where ultra-low power consumption and fast response times are critical, including medical devices, wearables, industrial robotics and large-scale IoT systems. POLYN’s innovation goes beyond pushing computation to the edge; it redefines the role of the sensor itself. Instead of acting as a passive data source, the company’s neuromorphic approach turns the sensor into an active computational component capable of filtering, recognition and early decision-making at the hardware level. Presenting this technology at CES, rather than at an academic conference, signals a transition toward commercial readiness, positioning POLYN as an alternative to power-hungry, digital-accelerator-based approaches to AI at a time when the market is seeking more efficient and scalable solutions for physical AI.

Avon AI: Managing AI agents as an organizational challenge

Avon AI is a very young Israeli startup operating in one of the fastest-emerging areas of artificial intelligence: the management and deployment of AI agents within enterprises. At CES, the company will present a platform designed to govern, monitor and operationalize intelligent agents in organizational environments, at a moment when many enterprises are moving from experimenting with models and chatbots to deploying AI systems that perform actions, make decisions and interact with sensitive data. Rather than building another model or conversational interface, Avon AI focuses on the control layer: who operates the agent, which systems it can access, what it is allowed to do, and how unexpected behavior can be monitored and mitigated over time. The platform enables organizations to treat AI agents as digital employees, with transparency, measurement and continuous improvement, rather than experimental code owned solely by development teams. The company’s debut at CES aligns with the expectation that AI agents will be one of the show’s dominant themes, reflecting AI’s broader shift from demos to enterprise production environments, where reliability, governance and compliance become baseline requirements.
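The article does not describe Avon AI's implementation, but the control layer it refers to can be sketched in a few lines: a declared policy stating who owns an agent and which tools it may call, enforced and logged around every action. The names and structure below are hypothetical.

```python
# Hypothetical sketch of an agent governance layer (not Avon AI's product):
# a declared policy of ownership and allowed tools, enforced and audited.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPolicy:
    agent_id: str
    owner: str                                            # who operates the agent
    allowed_tools: set[str] = field(default_factory=set)  # what it is allowed to do
    audit_log: list[dict] = field(default_factory=list)   # monitoring trail

    def invoke(self, tool: str, payload: dict, tools: dict):
        allowed = tool in self.allowed_tools
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id, "tool": tool, "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{self.agent_id} may not call {tool}")
        return tools[tool](payload)

# Example: an invoicing agent may read CRM records but not issue refunds.
tools = {"crm.read": lambda p: {"customer": p["id"], "status": "active"},
         "payments.refund": lambda p: {"refunded": True}}
policy = AgentPolicy("invoice-agent", owner="finance-ops", allowed_tools={"crm.read"})
print(policy.invoke("crm.read", {"id": 42}, tools))   # permitted and logged
# policy.invoke("payments.refund", {"id": 42}, tools) # would raise PermissionError
```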

bananaz AI: Artificial intelligence for mechanical engineers

While Avon AI addresses AI agents at the enterprise level, bananaz AI focuses on a very different audience: mechanical engineers and product development teams. At CES, the company will unveil its Design Agent for the first time, an AI agent that integrates directly into engineering workflows, reads CAD files and technical drawings, and helps identify design issues, compliance risks and manufacturing challenges early in the development cycle. Unlike general-purpose AI tools, bananaz’s agent is built on deep domain knowledge of mechanical engineering and the relationships between geometry, materials, manufacturing constraints and safety requirements. Showcasing the solution at CES, a consumer-oriented event that has increasingly become a platform for the entire product development pipeline, underscores the link between engineering tools and the physical products that ultimately reach consumers. For bananaz, this marks its first global exposure; for the market, it illustrates how AI agents are moving into the core of industrial design and engineering, not just customer service, marketing or analytics.

iRomaScents: Digital experiences with a sense of smell

iRomaScents is an Israeli startup operating in a less conventional corner of technology and user experience. The company has developed a smart digital scent delivery system that adds an additional sensory layer to content and consumer experiences. At CES, iRomaScents will present an advanced version of its solution, having already appeared at the show in previous years, with this year’s focus on deeper integration between scent, digital content and interactive experiences. The compact system, based on digitally controlled scent capsules, enables precise control over timing, intensity and personalization, allowing scent to become a natural part of the experience rather than a novelty effect. The company’s approach positions smell as a functional interface layer alongside screen and audio, opening the door to applications in consumer products, retail and home entertainment. Its return to CES with a stronger emphasis on commercial use cases reflects a broader shift from experimental multisensory experiences toward practical, scalable applications.

Validit AI: Real-time identity and intent verification

Validit AI is an Israeli startup operating at the intersection of cybersecurity, artificial intelligence and behavioral analysis. At CES, the company will present its platform for real-time identity verification and intent analysis, aiming to expand the concept of authentication beyond a single login event. Instead of relying on passwords, codes or one-time biometric checks, Validit AI continuously analyzes usage patterns and behavior to detect anomalies and prevent unwanted actions at an early stage. For organizations deploying AI in production and operating autonomous systems that handle sensitive data and processes, this challenge becomes increasingly critical as autonomy grows. Presenting at CES highlights the show’s evolving role as a venue not only for consumer products, but also for foundational technologies of trust and security in the AI era, particularly those addressing the risks introduced by intelligent and autonomous systems.
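As a purely illustrative sketch of continuous verification, and not Validit AI's actual method, each new action can be scored against a rolling baseline of recent behavior rather than trusted on the strength of a single login:

```python
# Illustrative continuous-verification sketch: score each action against a
# rolling baseline of one behavioral feature and flag large deviations.
from collections import deque
from statistics import mean, pstdev

class BehaviorMonitor:
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # rolling baseline of a feature,
        self.threshold = threshold            # e.g. request size or typing cadence

    def check(self, value: float) -> bool:
        """Return True if the action looks consistent with the baseline."""
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), pstdev(self.history) or 1e-9
            if abs(value - mu) / sigma > self.threshold:
                return False                  # anomalous: step up verification
        self.history.append(value)
        return True

monitor = BehaviorMonitor()
for v in [1.0, 1.2, 0.9, 1.1] * 5:            # normal activity builds the baseline
    monitor.check(v)
print(monitor.check(1.05))                     # True  - within baseline
print(monitor.check(9.0))                      # False - deviation, challenge the user
```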

Smart Sensum: Metamaterials as next-generation sensors

Smart Sensum is an Israeli deep-tech company focused on sensing and wireless communication, developing smart radar and antenna systems based on metamaterials and metasurfaces. At CES, the company will showcase its compact mmWave radar systems and programmable antennas, designed to deliver more accurate sensing and communication in smaller form factors and with lower power consumption. Smart Sensum’s approach addresses long-standing bottlenecks in sensing technologies, including size, complexity and cost, by replacing parts of traditional RF electronics with intelligently engineered electromagnetic structures. The solutions target applications such as robotics, drones, industrial IoT and mobility systems, where reliable sensing and communication are essential for autonomous operation. Its presence at CES reflects the show’s growing emphasis not only on AI software, but also on deep sensing and communication technologies that form the physical foundation of intelligent systems.

Motion Informatics: Neurological rehabilitation with AI and AR

While CES is often associated with consumer innovation, the exhibition has increasingly expanded into digital health, where AI-driven and data-centric technologies are reshaping clinical practice. In this context, Motion Informatics will present its neurological rehabilitation solutions, combining AI, biofeedback and augmented reality. The company develops platforms that analyze muscle activity in real time using EMG data and adapt electrical stimulation and training protocols to the patient’s condition, creating an interactive and personalized rehabilitation process. The system enables patients to perform guided exercises at home or under clinical supervision, while AI models optimize neurological recovery over time. Motion Informatics’ presence at CES reflects the growing demand in digital health for practical, outcome-driven solutions, positioning the company within a broader wave of cross-industry technologies that merge AI, advanced sensing and patient-centered experience.
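As a rough illustration of such a closed loop, and not Motion Informatics' actual algorithm, an EMG-driven controller might compute the RMS amplitude of each signal window and reduce assistive stimulation as voluntary muscle activity increases:

```python
# Toy closed-loop sketch: windowed RMS of a synthetic EMG signal mapped to an
# assistance level that falls as the patient's own muscle activity rises.
import numpy as np

FS = 1000                      # sample rate (Hz), illustrative
WINDOW = 200                   # 200 ms analysis window

def emg_rms(window_samples: np.ndarray) -> float:
    """Root-mean-square amplitude of one EMG window."""
    return float(np.sqrt(np.mean(window_samples ** 2)))

def stimulation_level(rms: float, target_rms: float = 0.4) -> float:
    """More voluntary activity means less assistance, clamped to [0, 1]."""
    return float(np.clip(1.0 - rms / target_rms, 0.0, 1.0))

rng = np.random.default_rng(0)
signal = 0.3 * rng.standard_normal(FS * 2)     # two seconds of synthetic EMG

for start in range(0, len(signal) - WINDOW, WINDOW):
    rms = emg_rms(signal[start:start + WINDOW])
    print(f"t={start / FS:.1f}s  rms={rms:.2f}  stim={stimulation_level(rms):.2f}")
```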

temi: Robotics as a service, not a gimmick

temi is one of the most established Israeli companies in the service robotics space, and it comes to CES not to present a futuristic concept, but to demonstrate the next stage in the evolution of its robot as a mature, deployable product. This year, the company will present an updated version of the temi platform, focusing on expanded capabilities, greater modularity and deeper integration with enterprise systems and AI applications. Already deployed in hotels, senior living facilities, lobbies and service environments, the robot is positioned as a practical tool that performs defined tasks such as welcoming guests, guiding visitors, providing information and connecting to existing management systems. temi’s presentation at CES highlights a broader shift in robotics, from general-purpose machines that showcase basic capabilities to service platforms that deliver clear operational value in real-world environments, positioning the company as a mature player in a market that is only now beginning to scale.