Nvidia and Mellanox built a Supercomputer in just a Month

Photo above: Mellanox’s AI platform protects supercomputers from hacking and inappropriate use

In their first joint announcement, Nvidia and Mellanox unveiled a reference design for the rapid construction of supercomputers, together with a new cyber-protection platform for supercomputers. Mellanox has expanded its Unified Fabric Manager (UFM) product line with a new appliance called the UFM Cyber-AI Platform.

It provides cyber protection to supercomputers and large data centers, using artificial intelligence software that learns the behavior characteristics of the computing systems in order to identify malfunctions and detect abnormal activity that may indicate hacking or unauthorized use.

Mellanox originally developed the UFM technology a decade ago to manage InfiniBand-based communications systems: it provides network telemetry data, monitors the activity of all connected devices, and manages software updates across the network’s components.

The new solution is available either as a software package or as a complete appliance based on a dedicated Nvidia server. It focuses on characterizing normal computer operation and identifying unusual activity. According to Nvidia and Mellanox, the system significantly reduces data center downtime, the cost of which is estimated at $300,000 per hour.

Supercomputers are open and unprotected platforms

According to Mellanox’s VP of Marketing, Gil Shainer, the integration of Mellanox’s InfiniBand with Nvidia’s GPU changes the rules of the game in the supercomputer market, bringing to it unprecedented cyber security and preventive maintenance capabilities. Shainer: “Supercomputers are managed differently from organizational computer centers. Usually these are open platforms that need to provide easy access to many researchers around the world.”

To illustrate the dilemma, he recalled an event that took place several years ago at an American university. “The administrator of the computer center told me how they caught a student using a computer for crypto mining. The suspicion arose when they noticed that the computer’s power consumption did not decline during the annual vacation, a period in which the computer is usually not active. Our solution lets you detect such a situation right away, without waiting for the computer’s power bill.”
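The detection idea in Shainer’s anecdote (learning a baseline and flagging telemetry that deviates from it) can be illustrated with a toy sketch. This is not the UFM Cyber-AI algorithm, just a generic rolling z-score check over hypothetical hourly power readings:

```python
# Toy baseline anomaly detection on power telemetry.
# NOT the UFM Cyber-AI algorithm -- just a generic z-score check
# over a hypothetical series of hourly power readings (in watts).
from statistics import mean, stdev

def find_anomalies(readings, window=24, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical data: a mining load pushes consumption above the
# learned ~500 W baseline, and the deviation is flagged immediately.
normal = [500.0 + (i % 3) for i in range(24)]
mining = [520.0] * 4
print(find_anomalies(normal + mining))
```

A real system would also model expected seasonal drops (such as the vacation period in the anecdote); this sketch only flags deviations from the recent past.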

Reference Design for the Rapid Construction of Supercomputers

Alongside the joint announcement, Nvidia unveiled a new supercomputer called Selene (photo above), considered the most powerful industrial supercomputer in the United States, with peak performance of 27.5 petaflops. The computer is based on the new A100 GPUs announced this week and was built for internal research at Nvidia. During a press briefing last week, Shainer revealed that the new computer was built in just one month, a record time for the construction of a supercomputer.

Shainer: “The ability to build a supercomputer in a month rests on expertise in both communications and processors. We have developed a reference design that allows anyone to build a supercomputer from ready-made building blocks of Nvidia processors and Mellanox interconnects. Because the processors are fully compatible with the communications cards, the computer can be set up in no time. In fact, we have jointly developed a reference design that allows for the construction of computers of any size – not just supercomputers.”

BMW-Mercedes Break-up is Bad News for Intel/Mobileye

Photo above: BMW impression of highway autonomous driving

Less than a year after the German automotive giants BMW Group and Mercedes-Benz AG agreed to work together on joint development of next-generation technologies for driver assistance systems and automated driving, they have decided to halt the cooperation and take different paths. Last week they announced that they are putting their cooperation on automated driving “temporarily on hold”.

The original agreement raised many expectations: in July 2019, the two parties announced a long-term strategic cooperation that was to include joint development of driver assistance systems, automated driving on highways, and automated parking (up to SAE Level 4). They planned to bring together more than 1,200 specialists from both companies, often in mixed teams, to develop a scalable architecture for driver assistance systems, including sensors, as well as a joint data center for data storage, administration and processing, and for functions and software.

Intel/BMW vs Mercedes/NVIDIA

For Intel and Mobileye (owned by Intel) it was a great opportunity: They both have a long and deep cooperation with BMW Group in all aspects of Autonomous Driving, and the agreement could secure their dominant position in the German car industry. “We have systematically further developed our technology and scalable platform with partners like Intel, Mobileye, FCA and Ansys,” said Klaus Fröhlich, member of the Board of Management of BMW. “Our current technology, with extremely powerful sensors and computing power, puts us in an excellent position.”

But those hopes were short lived: “Digitalization is a major strategic pillar for Mercedes-Benz. To prepare for the future challenges of a rapidly changing environment, we are currently also sounding out other possibilities with partners outside the automotive sector,” said Markus Schäfer, Board Member of Daimler AG and Mercedes-Benz.

And it turned out that one of these “partners” is NVIDIA – a bitter competitor of Intel and Mobileye. On Tuesday, June 23, they announced a cooperation to create a revolutionary in-vehicle computing system and AI computing infrastructure. Starting in 2024, this will be rolled out across the fleet of next-generation Mercedes-Benz vehicles.

The new software-defined architecture will be built on the NVIDIA DRIVE platform and will be standard in Mercedes-Benz’s next-generation fleet. But there is a twist: NVIDIA and Mercedes-Benz will jointly develop the AI and automated vehicle applications for SAE level 2 and 3 – far below the ambitious goal of the original BMW/Mercedes coalition.

Weebit Nano raised $4.5 million to commercialize ReRAM technology

Hod Hasharon (near Tel Aviv)-based Weebit Nano, which is developing a new type of non-volatile ReRAM memory, has raised US$4.5 million through a private placement of shares on the Australian Securities Exchange (ASX). The company is now trying to raise an additional A$0.5 million through a public offering. Weebit Nano’s CEO, Coby Hanoch, told Techtime that this financing round “will allow us to move towards commercialization, and hopefully, within a year we’ll already be engaging in serious interactions with potential customers.”

According to the report supplied by the company to the ASX, about half of the money raised will be allocated for the development of a dedicated module for embedded systems, the company’s first target market for its ReRAM technology. “Our technology has already been proven and tested by customers. We are now developing a specific module of the memory, in order to make it suitable for the embedded systems market.”

The Best of all Possible Worlds

Approximately 20% of the amount will be allocated to the development of a component called a ‘Selector’, designed to minimize leakage currents between memory cells, and about 15% to transferring the technology to standard fab manufacturing facilities. Weebit Nano is developing Resistive Random Access Memory (ReRAM), based on materials that change their electrical resistance in response to applied voltage and retain that resistance state after power is disconnected.

The technology combines the non-volatility of flash memory with the speed, low power, and long endurance of volatile DRAM technologies. The company estimates that its prototype is 1,000 times faster and uses 1,000 times less power than flash memory, traits which make it a perfect candidate for IoT, artificial intelligence, data centers and more.

Recently, Weebit Nano announced its first commercial collaborations, with the Chinese semiconductor companies XTX and SiEn. Together they will examine the integration of Weebit Nano’s memory component into their products. “China is the largest chip consumer in the world, and is determined to build an independent semiconductor industry,” said Hanoch.

Altair’s new business: AI DSP Engines

Photo above: Sony intelligent vision sensors IMX500 (left) and IMX501. Both include Altair’s DSP processor

Hod Hasharon-based Altair Semiconductor (owned by Sony) has secretly expanded its operations beyond the IoT sector and entered the Artificial Intelligence (AI) chips market. This came to light last month, when Sony announced new image sensors for smart control systems. Each component is built of two chips stacked inside a single package (a multi-chip module): a Sony image sensor and a DSP processor developed by Altair, which is responsible for neural network inference operations.

This new family of smart image sensors currently consists of two components: the IMX500 and the IMX501. When installed in a security camera, street camera, or other IoT device, the logic circuit processes the image and sends only the inference result to the network center. This saves considerable processing and communication resources and enables a device to function as a smart sensor without compromising the privacy of the people being photographed.

A smart camera equipped with the visual-logic sensor can count the number of people in a store and transmit that information without having to send their images to the cloud. It can discern congestion patterns in various complexes, and even track customer behavior in the store – based only on analyzing their movements – without having to identify the customers themselves.
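The pattern described above, inference at the sensor with only results on the wire, can be sketched generically. The function names and message format below are hypothetical stand-ins, not Sony’s or Altair’s actual interfaces:

```python
# Generic edge-inference pattern: the device runs detection locally and
# transmits only a small metadata payload, never the raw image.
# `count_people` is a hypothetical stand-in for the on-sensor network.
import json

def count_people(frame):
    """Placeholder for on-device inference; a real sensor would run a
    neural network here. We simply read a precomputed detection count."""
    return frame["detections"]

def build_payload(frame):
    # Only aggregate, non-identifying data leaves the device.
    return json.dumps({
        "camera_id": "store-entrance-01",   # hypothetical device name
        "timestamp": frame["ts"],
        "person_count": count_people(frame),
    })

frame = {"ts": 1718000000, "detections": 7, "pixels": b"...raw image..."}
payload = build_payload(frame)
print(payload)  # a few dozen bytes of JSON; the image never leaves the device
```

The privacy property falls out of the design: the payload is built only from aggregate fields, so the raw pixel buffer cannot appear on the network.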

The results can be sent back in a variety of configurations: pure decoded information without visual elements, an image in various formats, or only the relevant visual area. From Sony’s point of view, this marks an entry into a major market characterized by very high growth. As far as Altair is concerned, this is a surprising development, since until now the company had focused on communication solutions for IoT devices, not on the development of DSP or artificial intelligence processors.

Altair’s core activity is IoT connectivity chips, with its flagship product being the ALT1250 chipset, which includes a modem supporting both the Cat-M1 and NB-IoT standards. It features an RF front-end circuit that supports all LTE bands, an RFIC, a power management unit (PMU), memory, amplifier circuits, filters, an antenna switch, a global navigation satellite system (GNSS) receiver, hardware-based security, an eSIM circuit and an internal microcontroller unit (MCU) that allows customers to develop unique applications.

A new strategy for both Altair and Sony

Sony’s announcement positions it in a massive market and transforms it into a hybrid IoT/image-sensor player. The move could secure very large orders for Altair. It may also hint at a new Altair strategy that could develop in two interesting directions: the first is the integration of ALT1250 technologies into Sony’s future image sensors – alongside the recently unveiled AI processor.

The other direction is independent: integrating the artificial intelligence processor into its next-generation connectivity chip – a kind of ALT1250 reinforced with artificial intelligence. An IoT connectivity chip embedded with artificial intelligence has many advantages: from providing artificial intelligence to ‘dumb’ cameras, to enhanced communication management capabilities, and even a stronger security system than the current-generation ALT1250 offers.

Connected Devices in an Era of Pandemics

By: Igor Tovberg, Director of Product Marketing at Altair Semiconductor, a Sony Group Company

Technology has a history of helping to track and treat viruses. And, with the World Health Organization (WHO) declaring COVID-19 a global pandemic, people are rightly asking themselves how new technologies such as the Internet of Things (IoT), AI, and Big Data can be employed to slow down the proliferation of pandemics and avoid a future global health crisis.  In this article, I describe how connected medical devices could help.

Monitoring trends with Wearables

Millions of wearable devices have been deployed globally. Activity and heart-rate sensing are becoming a baseline feature in every fitness band and smartwatch, with data being continuously sensed and uploaded into the cloud. Would this data be useful in predicting a spreading epidemic?

Indeed, a recently published study by the Scripps Research Translational Institute in The Lancet Digital Health analyzed such data and found that resting heart rate and sleep-duration data collected from wearable devices could help inform timely and accurate models of population-level influenza trends. Sensing and analyzing more physiological factors would improve the speed and accuracy of epidemic detection.
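To make the idea concrete, here is a toy aggregate (not the Scripps methodology) showing how per-user resting-heart-rate readings could be reduced to a single population-level signal of the kind such models consume:

```python
# Illustrative only: collapse per-user resting heart rates (RHR) into a
# population-level "elevated RHR" signal. All figures are hypothetical;
# this is not the methodology of the Lancet study.
def excess_rhr_signal(users_baseline, users_weekly):
    """For each week, return the fraction of users whose resting heart
    rate exceeds their personal baseline by more than 2 bpm."""
    weeks = len(next(iter(users_weekly.values())))
    signal = []
    for w in range(weeks):
        elevated = sum(
            1 for uid, series in users_weekly.items()
            if series[w] - users_baseline[uid] > 2.0
        )
        signal.append(elevated / len(users_weekly))
    return signal

baseline = {"u1": 60.0, "u2": 55.0, "u3": 72.0}   # hypothetical baselines
weekly = {                                         # three weeks of readings
    "u1": [60.5, 64.0, 65.0],
    "u2": [55.2, 55.5, 59.0],
    "u3": [71.8, 76.0, 77.0],
}
print(excess_rhr_signal(baseline, weekly))  # rising fraction week over week
```

Comparing each user against their own baseline, rather than a population average, is what lets the aggregate signal rise even though individual heart rates vary widely.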

Changes in patient care habits

Isolation is one of the preventive actions being taken to stop the virus spread, as exposure to an infected carrier could prove fatal for people with a weakened immune system. Now, more than ever, health stats relating to virus symptoms can be sent to health care providers without patients having to visit their clinic and risking exposure.

mHealth

Connected devices such as thermometers, blood pressure meters, inhalers, glucose meters, or other personal health monitoring devices will play a significant role in protecting people’s lives.

Cellular connectivity through the CAT-M or NB-IoT network can ensure a secure and reliable countrywide link for the delivery of patients’ stats to their health care provider from any location, regardless of WiFi/BLE coverage.

Cellular devices that are connected out of the box free doctors from relying on a patient’s ability to set up a LAN/PAN connection themselves.

Quarantine compliance with smart cellular IoT wristbands

The general population can wear smart wristbands as health monitors. With its emphasis on small size and long battery life, cellular IoT offers reliable connectivity for smart wristbands, independent of paired smartphones. Recently, the Hong Kong government deployed smart wristbands to monitor city residents quarantined inside their homes.

Accelerating the speed of reaction

Monitoring is vital in the detection chain, and reaction time is critical for prevention. Enterprises, airports, and cities would surely benefit from monitoring devices for citizens, and healthcare facilities would benefit from the ability to monitor remote patients. Timely discovery of outbreaks could prevent the spread of dangerous new viruses in the future.

Solution

For personal, medical, or environmental monitoring, Altair’s ALT1250 ultra-low power, compact, secure, and highly integrated cellular IoT chipset enables slimmer devices with long battery life, which can remain continuously connected – reliably connecting people in ways previously unobtainable. All without the need for a smartphone or home WiFi network.

Conclusion

According to Bill Gates, in any crisis, leaders have two equally important responsibilities: Solving the immediate problem and keeping it from happening again. It’s clear that IoT technology, and specifically medical devices, have an important role to play in the containment and treatment of outbreaks like COVID-19. I genuinely believe that IoT can be fully harnessed to control and potentially prevent the next global pandemic.

AI Chipmaker Hailo Raised $60 Million

The Tel Aviv-based chipmaker startup Hailo has successfully completed a $60 million financing round with the participation of key strategic investors ABB Technology Ventures (the VC arm of ABB) and NEC Corporation. The funding will be used to bring the company’s Hailo-8 Deep Learning processor into mass production during 2020. The company currently employs approximately 80 people.

Since its inception in February 2017, Hailo has raised $88 million. Following the latest investment round, it is beginning to recruit 30-40 new employees for its research and development and support departments, as well as offshore staff for new offices in Europe, Japan and the US, to be opened in 2020. The company’s Hailo-8 is a dedicated neural network processor designed to run inference on edge devices in automotive and industrial applications.

“We look forward to combining Hailo’s solution with our cutting-edge industrial technology as an important piece of the puzzle to drive the digital transformation of industries,” said Kurt Kaltenegger, Head of ABB Technology Ventures. Hiroto Sugahara, General Manager of the Corporate Technology Division, NEC Corporation, said that Hailo’s technology will help NEC “to dive deeper into the intelligent video analytics market. We look forward to incorporating Hailo’s technology into our next-generation edge-based products.”

CEO, Orr Danon: A novel architecture to enable fast and power saving implementation of Neural Networks

The company’s co-founder and CEO, Orr Danon, told Techtime that the Hailo-8 chip presents a novel architecture enabling fast and power-saving implementation of neural networks. “We identified that during inference processing, there are differences in the behavior of the different layers in the neural network. Our solution provides exactly the resources needed by each layer.

“In contrast, our competitors, who use solutions such as GPU processors, allocate the same level of resources to each and every layer. Our development software learns the specific problem of each application, characterizes it, and gives the chip instructions on how to manage the resources of each layer optimally.”
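Danon’s point can be illustrated in the abstract: given a compute profile per layer, a compiler can split a fixed hardware budget proportionally rather than uniformly. The figures and allocation rule below are hypothetical, not Hailo’s actual toolchain:

```python
# Toy illustration of per-layer resource allocation: split a fixed
# hardware budget across layers in proportion to each layer's compute
# demand, instead of giving every layer the same resources.
# Figures and allocation rule are hypothetical, not Hailo's toolchain.
def allocate(budget_units, layer_macs):
    """Split `budget_units` processing elements across layers in
    proportion to their multiply-accumulate (MAC) counts."""
    total = sum(layer_macs.values())
    return {name: max(1, round(budget_units * macs / total))
            for name, macs in layer_macs.items()}

# Hypothetical per-layer MAC counts for a small CNN.
profile = {"conv1": 40_000_000, "conv2": 150_000_000,
           "conv3": 90_000_000, "fc": 20_000_000}
print(allocate(100, profile))
# A uniform scheme would give each layer 25 units; the proportional
# split concentrates resources on the heaviest layer (conv2).
```

The contrast with a uniform split is the whole argument: lightweight layers stop hoarding idle resources, which is where the claimed power and speed gains come from.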

According to Hailo, its processor reaches up to 26 tera-operations per second (TOPS) at 3 TOPS per watt. It will meet the strict ISO 26262 ASIL-B and AEC-Q100 Grade 2 standards. Hailo-8 comprises four main components: an image signal processor that improves the image arriving from the sensor before it is passed to the neural network core, an H.264 encoder that handles the video stream, an ARM Cortex-M4 processor that manages the chip, and the neural network core itself.

This neural network core is a flexible matrix of software-configurable processing, control, computational and memory units. “Hailo’s Deep Learning chip is a real game changer in industries such as automotive, Industry 4.0, robotics, smart cities, and more,” said Hailo Chairman Zohar Zisapel. “A new age of AI chips means a new age of computing capabilities at the edge.”

Arbe Raised $32M for New Automotive 4D Radar Chipset

Tel Aviv-based Arbe announced the closing of a $32 million Round B funding for its 4D imaging radar chipset. Arbe will use the funding to move to full production of its automotive radar chipset, which it says generates an image 100 times more detailed than any other solution on the market today.

Founded in 2015 by an elite team of semiconductor engineers, radar specialists, and data scientists, Arbe has secured $55 million from leading investors, including Canaan Partners Israel, iAngels, 360 Capital Partners, O.G. Tech Ventures, Catalyst CEL, AI Alliance, BAIC Capital, MissionBlue Capital, and OurCrowd. Arbe is based in Tel Aviv, Israel, and has offices in the United States and China.

The company has developed a 4D imaging radar chipset enabling high-resolution sensing for ADAS and autonomous vehicles. Arbe’s technology produces detailed images; separates, identifies, and tracks objects in high resolution in both azimuth and elevation, at long range and across a wide field of view; and is complemented by AI-based post-processing and SLAM (simultaneous localization and mapping).

Its Phoenix radar chip supports more than 2,000 virtual channels, tracking hundreds of objects simultaneously in a wide field of view at long range, with 30 full-scan frames per second. The company believes its solution poses a low-cost alternative to current LiDAR sensors in ADAS systems and future autonomous vehicles.