BlueField Chip Brings Mellanox to Storage Fabrics
15 August, 2017
The new communication chip combines a ConnectX-5 network controller with a 16-core ARM processor. It aims to bring Mellanox into storage and the emerging market of Artificial Intelligence.
At Mellanox, every new silicon chip is a strategic cornerstone for a family of products. That is why Mellanox's new BlueField System-on-Chip, announced last week, is so interesting. BlueField integrates all the technologies needed to connect NVMe over Fabrics flash arrays, providing 200Gbps of throughput and more than 10 million IOPS in a single SoC device. BlueField combines the Mellanox ConnectX-5 Ethernet/InfiniBand network controller, 16 64-bit ARMv8 Cortex-A72 cores, and an integrated PCIe switch with up to 32 lanes of PCIe Gen 3.0/4.0.
Most interestingly, it supports NVIDIA GPUDirect RDMA, enabling peer-to-peer communication between BlueField and third-party hardware such as GPUs. Mellanox explained that BlueField enables efficient deployment of networking applications, both as a smart NIC and as a standalone network platform, and it also indicated two new potential markets for the SoC: storage and machine learning.
Mellanox targets storage platforms such as NVMe over Fabrics (NVMe-oF) All-Flash Arrays (AFA), storage controllers for JBOFs, server caching (memcached), disaggregated rack storage, scale-out direct-attached storage, and storage RAID. These technologies are on the rise as more and more data centers move from magnetic hard-drive storage to semiconductor-based systems, which are faster and cheaper but require close control to maintain data integrity and fast interconnects.
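On the host side, attaching to an NVMe-oF target over an RDMA fabric is typically done with the standard Linux nvme-cli tool. A minimal configuration sketch follows; the target address 192.168.1.10 and the subsystem name nqn.2017-08.com.example:nvme-array are placeholders, not values from the announcement:

```shell
# Load the NVMe-over-Fabrics RDMA transport module on the host
modprobe nvme-rdma

# Ask the target which NVMe subsystems it exposes
# (address, port, and NQN below are illustrative placeholders)
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem by its NQN
nvme connect -t rdma -a 192.168.1.10 -s 4420 \
    -n nqn.2017-08.com.example:nvme-array

# The remote namespace now appears as a local block device,
# e.g. /dev/nvme0n1, visible via:
nvme list
```

Once connected, the remote flash namespace behaves like a local NVMe drive, which is what makes disaggregated and scale-out storage designs practical over a fast fabric.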
While data centers change their storage characteristics, they also adopt new structural elements to enable effective Artificial Intelligence and Machine Learning applications. That means more GPUs inside the data center, and even clusters of thousands of GPUs. The BlueField SoC was designed to answer those needs with its PCIe Gen 3.0/4.0 interface and its RDMA and GPUDirect RDMA technologies. Mellanox even developed its own solution for fast GPU interconnectivity: PeerDirect, a communication architecture that supports peer-to-peer communication between BlueField and third-party hardware such as GPUs, co-processor adapters (Intel Xeon Phi), or storage adapters.