IBM and NeuReality team up to build AI Server-on-a-Chip

IBM and NeuReality, an Israeli AI systems and semiconductor company, have signed an agreement to develop the next generation of high-performance AI inference platforms, promising disruptive cost and power-consumption improvements for deep learning use cases. IBM and NeuReality will enable critical sectors such as finance, insurance, healthcare, manufacturing, and smart cities to deploy computer vision, natural language processing, recommendation systems, and other AI use cases. The collaboration also aims to accelerate deployment of the ever-growing range of AI use cases already running in public and private cloud datacenters.

The agreement involves the NR1, NeuReality's first Server-on-a-Chip ASIC implementation of its AI-centric architecture. The NR1 builds on NeuReality's first-generation, FPGA-based NR1-P prototype platform, which was introduced earlier this year. The NR1 will be a new type of integrated circuit with native AI-over-Fabric networking, full AI pipeline offload, and hardware-based AI hypervisor capabilities. These capabilities remove the system bottlenecks of today's solutions and provide disruptive cost and power-consumption benefits for inference systems and services. The NR1-P platform will support software integration and system-level validation ahead of the NR1 production platform, expected next year.

This partnership also makes NeuReality the first start-up semiconductor product member of the IBM Research AI Hardware Center and a licensee of the Center's low-precision, high-performance Digital AI Cores. As part of the agreement, IBM becomes a design partner of NeuReality and will work on the product requirements for the NR1 chip, system, and SDK, to be implemented in the next revision of the architecture. Together, the two companies will evaluate NeuReality's products for use in IBM's Hybrid Cloud, including AI use cases, system flows, virtualization, networking, security, and more.

The agreement with IBM marks continued momentum for NeuReality. In February this year, the company emerged from stealth, announcing its first-of-its-kind AI-centric architecture and laying out a roadmap in which the NR1-P will be followed by the NR1. Later, in September, NeuReality announced a collaboration with Xilinx to bring its new AI-centric, FPGA-based NR1-P platforms to market.

Moshe Tanach, CEO and co-founder of NeuReality, stated: “We are excited and deeply satisfied that a world-class multinational innovator like IBM is partnering with us. We believe our collaboration is a vote of confidence in our AI-centric technology and architecture and in its potential to power real-life AI use cases with unprecedented deep learning capabilities.” Tanach added: “Having the NR1-P FPGA platform available today allows us to develop IBM’s requirements and test them before the NR1 Server-on-a-Chip’s tapeout. Being able to develop, test and optimize complex datacenter distributed features, such as Kubernetes, networking, and security before production is the only way to deliver high quality to our customers. I am extremely proud of our engineering team who will deliver a new reality to datacenters and near edge solutions. This new reality will allow many new sectors to deploy AI use cases more efficiently than ever before.”

Dr. Mukesh Khare, Vice President of Hybrid Cloud research at IBM Research, said: “In light of IBM’s vision to deliver the most advanced Hybrid Cloud and AI systems and services to our clients, teaming up with NeuReality, which brings a disruptive AI-centric approach to the table, is the type of industry collaboration we are looking for. The partnership with NeuReality is expected to drive a more streamlined and accessible AI infrastructure, which has the potential to enhance people’s lives.”

IBM develops a giant 1,000-qubit quantum computer

Above: Members of the IBM Quantum team at work. Credit: Connie Zhou for IBM

IBM announced an ambitious quantum computing roadmap that includes a 1,000-qubit quantum computer by the end of 2023. Today’s machines consist of only a few dozen qubits. According to IBM, the number of qubits in its quantum processors will double every year or two. In 2022, IBM will complete development of a 400-qubit processor, and in 2023 it will launch a 1,121-qubit processor to be called Condor.

IBM’s vision is very ambitious: “Our future computers will include more than a million qubits.” IBM is one of the most advanced players in quantum computing. In 2016, it was the first to offer public access to a quantum computer via the cloud. Today, IBM’s cloud provides access to more than 20 quantum computers with 5 and 24 qubits. Earlier this year it launched a new 65-qubit quantum computer, its most powerful to date.
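To put these numbers in perspective, here is a minimal back-of-the-envelope sketch in Python, assuming the article’s “double every year or two” scaling: starting from the 1,121-qubit Condor planned for 2023, it estimates how many doublings (and roughly how many years) would be needed to reach the million-qubit machines IBM envisions. The starting point and growth rate are taken from the article; the projection itself is an illustrative assumption, not an IBM figure.

```python
import math

# Figures quoted in the article (IBM roadmap).
condor_qubits = 1_121        # "Condor" processor planned for 2023
target_qubits = 1_000_000    # IBM's stated long-term vision

# Article's assumption: qubit counts double "every year or two".
doublings_needed = math.log2(target_qubits / condor_qubits)

print(f"Doublings needed: {doublings_needed:.1f}")            # ~9.8
print(f"Years at one doubling per year:   ~{math.ceil(doublings_needed)}")
print(f"Years at one doubling per 2 years: ~{math.ceil(doublings_needed) * 2}")
```

Under that assumption, a million-qubit system would be roughly ten to twenty years beyond Condor, which is consistent with IBM framing it as a long-term vision rather than a near-term milestone.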

“Super-fridges” for millions of qubits

IBM builds its quantum computers from superconducting circuits, which have zero electrical resistance at very low temperatures. As part of the needed infrastructure, it will build a 10-foot-tall, 6-foot-wide super-refrigerator (to be called Goldeneye) that can accommodate arrays of 1,000 qubits. The long-range goal is to build a network of interconnected “super-fridges” that together provide a computing capability of one million qubits.

These fridges keep the qubit array at a temperature close to absolute zero, shielding it from electromagnetic interference that could disrupt the quantum circuit. In quantum computing, even the smallest stray radiation can destroy the computation, so the biggest challenge in building a large quantum computer is preserving the quantum state of the qubits until the computation is completed.

Cracking the cholesterol mystery

Speaking with Techtime, Nir Minerbi, CEO of the Israel-based Classiq, which develops software solutions for quantum computing, explained the practical significance of IBM’s roadmap. “The very fact that a company like IBM, which does not usually make far-reaching statements, presents a detailed technological roadmap with clear goals increases the industry’s confidence in the future of quantum computing.”

According to Minerbi, quantum computing is a “tie-breaker” in exactly the types of problems that classical computers, and even supercomputers, have difficulty dealing with. “All the supercomputers in the world, together, will never be able to simulate a single cholesterol molecule. But a quantum computer with several hundred qubits will be able to do this, and will be able to test how different molecules react with cholesterol and to develop drugs.”

The next layer of the quantum stack

Classiq is developing CAD solutions that will make it possible to write applications for quantum computers. “The quantum revolution consists of two things: hardware and software. Nowadays it is almost impossible to develop applications for a quantum computer, since you have to program at the logic gate level. It’s like designing a chip at the transistor level. We build the tools that allow developing applications at a higher level of abstraction. The next layer in the quantum stack.”
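To illustrate what “programming at the logic gate level” means in practice, here is a minimal sketch using IBM’s open-source Qiskit library (an assumption for illustration; the article does not name a specific framework, and Classiq’s higher-level tooling is not shown). It builds a two-qubit entangling circuit gate by gate, the level of abstraction Minerbi compares to designing a chip at the transistor level.

```python
# Gate-level quantum programming sketch with Qiskit (illustrative assumption;
# the article does not specify a framework).
from qiskit import QuantumCircuit

# Two qubits and two classical bits to hold the measurement results.
qc = QuantumCircuit(2, 2)

qc.h(0)                     # Hadamard gate: put qubit 0 into superposition
qc.cx(0, 1)                 # CNOT gate: entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # read both qubits into the classical bits

print(qc)                   # text drawing of the gate-level circuit
```

Even this trivial Bell-state circuit has to be spelled out gate by gate; Classiq’s pitch is that application developers should instead work at a higher level of abstraction and let tooling synthesize the gates.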

For Classiq, IBM’s roadmap is good news. “As computers get stronger, more companies are interested in developing applications for quantum computers. Today, the entire industry is looking at IBM’s statement. Now there is a clear horizon, and companies know that in a few years there will be quantum computers running significant algorithms. That’s why they are now starting to invest in software development and will need solutions like ours.”