Quantum X Labs Files Patent for Quantum-Accelerated Clinical Trial Analytics

By Yohai Schweiger

Israeli company Quantum X Labs has filed a U.S. patent application for a technology designed to accelerate and improve the analysis of clinical-trial data by integrating quantum computing. According to the disclosure, the invention centers on a computational algorithm built to handle large volumes of complex biological and clinical data, enabling more efficient inference about treatment responses, patient sub-populations, and probabilistic patterns using quantum capabilities.

The patent filing was reported on Wednesday by Geeks Internet, a publicly traded company on the Tel Aviv Stock Exchange. Geeks Internet holds approximately 26.42% of Viewbix, which is currently in the process of acquiring Quantum X Labs. Viewbix stated that Quantum X Labs—the acquisition target—submitted the U.S. patent application, and that development was carried out in collaboration with CliniQuantum, a portfolio company of Quantum X Labs focused on applying quantum computing to clinical-trial analysis.

The patent application is titled “Generating Quantum Markov Chain Monte Carlo Sampling Points for Continuous Distribution Functions.” The title offers a clear window into the technology’s core. At its heart lies a well-established statistical framework, widely used in machine learning, known as Markov Chain Monte Carlo (MCMC): a family of algorithms for probabilistic sampling and inference when systems are too complex for direct calculation, such as those involving many variables and high uncertainty. In clinical trials, MCMC methods help model treatment responses, estimate probabilities, and uncover hidden patterns within noisy or incomplete datasets.
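To make the MCMC idea in the title concrete, here is a minimal classical random-walk Metropolis sampler, one of the most common MCMC variants. The filing’s actual algorithm is not public; this sketch only illustrates the classical baseline that a quantum-accelerated version would aim to speed up, and the toy target distribution is purely illustrative.

```python
import math
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def metropolis_hastings(log_density, start, steps, step_size=0.5):
    """Random-walk Metropolis sampler for a continuous distribution.

    log_density: log of the (possibly unnormalized) target density.
    Returns the full chain of sampled points.
    """
    x = start
    chain = []
    for _ in range(steps):
        candidate = x + random.gauss(0.0, step_size)  # symmetric proposal
        # Accept with probability min(1, p(candidate) / p(x)),
        # computed in log space for numerical stability.
        if math.log(random.random()) < log_density(candidate) - log_density(x):
            x = candidate
        chain.append(x)
    return chain

# Toy target: a standard normal, standing in for a posterior over a
# quantity such as a treatment effect.
draws = metropolis_hastings(lambda x: -0.5 * x * x, start=0.0, steps=20000)
kept = draws[5000:]  # discard burn-in before summarizing
posterior_mean = sum(kept) / len(kept)
```

The chain wanders through the distribution, spending time in each region in proportion to its probability, so averages over the kept draws approximate averages under the target. Generating good candidate points efficiently is exactly the stage the patent’s “sampling points” language refers to.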

According to the public description, the innovation Quantum X Labs seeks to protect does not alter clinical protocols themselves, but rather the algorithmic sampling methodology. In a clinical-trial context, “samples” are not only lab measurements but computational representations of possible scenarios—different combinations of patient characteristics, treatment responses, dosages, and outcomes over time. MCMC algorithms rely on such samples to construct a probabilistic picture of how a therapy performs, even when data are partial or imperfect. The disclosed algorithm introduces a quantum component to generate these sampling points more efficiently and accurately from complex, continuous distributions. This approach could shorten computation times, improve statistical precision, and potentially enable comparable conclusions with fewer patients or fewer real-world measurements. In that sense, the patent targets the algorithmic and methodological layer, with a clear application to medical research.
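To see how such computational samples become the “probabilistic picture” described above, here is a hedged sketch of the downstream step: turning chain draws into probabilities and interval estimates by simple counting. The draws are mocked here as normal random numbers, since no real chain output is available, and all parameter values are illustrative.

```python
import random

random.seed(1)

# Stand-in for MCMC output: 50,000 draws from a posterior over a
# treatment effect, mocked here as Normal(mean=0.3, sd=0.2).
draws = [random.gauss(0.3, 0.2) for _ in range(50_000)]

# Probability that the effect is positive: the fraction of sampled
# scenarios in which it exceeds zero.
p_positive = sum(d > 0 for d in draws) / len(draws)

# Posterior mean and a 95% interval read straight off the sorted draws.
draws.sort()
posterior_mean = sum(draws) / len(draws)
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]
```

Because every inference reduces to counting over the draws, the quality and quantity of the sampling points directly bound the precision of the conclusions, which is why faster, more accurate sample generation could translate into smaller trials or tighter estimates.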

Importantly, the algorithm is not intended to run entirely on a standalone quantum computer. Instead, it follows a hybrid architecture: the overall statistical computation runs on classical computers, while a quantum processor is integrated as an accelerator for specific stages—primarily the generation of complex sampling points. This design aims to leverage today’s available quantum advantages without depending on fully mature, large-scale quantum hardware.
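One way to picture that hybrid split in code: the accept/reject bookkeeping stays on the classical host, while the proposal-generation stage is a pluggable function that could, in principle, be backed by a quantum co-processor. This is a conceptual sketch of the architecture the article describes, not the patented design; `classical_propose` and the driver below are hypothetical stand-ins.

```python
import math
import random

random.seed(7)

def classical_propose(x, step=0.5):
    """Classical proposal stage. In a hybrid design, this generation of
    candidate sampling points is the stage a quantum processor would
    accelerate; everything else stays on conventional hardware."""
    return x + random.gauss(0.0, step)

def hybrid_mcmc(log_density, start, steps, propose=classical_propose):
    """MCMC driver meant to run on a classical host. Only `propose` is
    accelerator-eligible; accept/reject logic and bookkeeping are not."""
    x = start
    chain = []
    for _ in range(steps):
        candidate = propose(x)  # <- the offloadable stage
        if math.log(random.random()) < log_density(candidate) - log_density(x):
            x = candidate
        chain.append(x)
    return chain

# Swapping in a different proposal backend changes nothing else in the
# driver; that separation is the point of a hybrid architecture.
chain = hybrid_mcmc(lambda x: -0.5 * x * x, start=0.0, steps=5000)
```

Isolating the accelerator behind a narrow interface is what lets such a system use today’s small quantum processors for one hot stage without waiting for hardware capable of running the whole computation.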

As noted, development was carried out with CliniQuantum, which focuses on applying advanced computing, including quantum methods, to clinical-trial analytics. CliniQuantum positions itself as a platform for identifying patient sub-populations, response patterns, and biological signatures within complex datasets, supporting probabilistic decision-making throughout clinical studies. Quantum X Labs develops quantum technologies across multiple domains, centralizes the intellectual property, and leads the core algorithmic work, while CliniQuantum serves as the applied arm for clinical use cases.

[Image above: Construction of a commercial-grade quantum computer at IBM. Photo: IBM]

Intel Seeks Patent for Software-Defined “Supercore”

[Image: Intel Xeon 6 server processors]

By Yohai Schweiger

Intel has filed a U.S. patent application describing a new technology it calls the “Software Defined Supercore.” According to the filing, the company envisions a way to link several physical cores so they function as a single, massive core capable of executing many instructions in parallel.

The idea is to push CPUs closer to the kind of parallel processing long associated with GPUs—without the cost and complexity of designing a physically enormous core. A CPU core is the basic unit that executes software instructions. In early computers, there was only one. Today, most processors include multiple cores, allowing them to run different tasks at once.

Intel’s proposal would make multicore processing more flexible and dynamic: when an application demands concentrated compute power, several cores could be fused into one broad “supercore.” Once the demand subsides, they would return to operating independently.

The implications are especially relevant for artificial intelligence. GPUs have become the workhorses of AI training and inference thanks to their ability to handle thousands of parallel calculations simultaneously. Intel’s approach aims to give CPUs a similar advantage—enabling a core to “scale up” by tapping into additional cores to handle complex workloads, from AI to simulations and high-performance computing.

A Software-First Mindset
In some ways, the concept echoes Nvidia’s CUDA software environment, which allowed developers to tap into GPU architecture in smarter ways and helped transform GPUs into essential engines for AI and advanced computation. Intel is seeking to provide a comparable software layer, though here the goal is to orchestrate CPU cores rather than hundreds or thousands of GPU threads.

What makes this effort especially noteworthy is the signal it sends about Intel’s software ambitions. In the company’s most recent earnings call, new CEO Lip-Bu Tan admitted Intel had lost its edge in software innovation in recent years, vowing to bring it back to the forefront. The Supercore patent filing may be an early sign of that strategy, reminding the industry that Intel’s focus extends beyond silicon into the software that directs it.

Still, it is important to stress that this is only a patent application—not a finished product. Implementing such a concept would require significant changes at multiple levels, from hardware design to operating systems and developer tools. In other words, the Supercore remains an intriguing idea with clear potential, but one that could take many years to materialize—if it ever does—in Intel’s commercial processors.