Artificial intelligence chip company Hailo Technologies Ltd. is launching two new AI acceleration modules aimed at boosting the processing capabilities of edge devices built on standard hardware.
Hailo introduced its first product, the Hailo-8, in 2019: a custom processor for running deep learning workloads at the edge of the network. Smaller than a penny, the Hailo-8 was built from scratch with fully redesigned memory, control, and compute architecture components, delivering higher performance, lower power consumption, and minimal latency. With the Hailo-8 deep learning chip, autonomous vehicles, smart cameras, drones, and AR/VR platforms can run complex deep learning applications directly at the edge.
Now, according to the company, the two new AI acceleration modules, in M.2 and Mini PCIe form factors, will support major deep learning frameworks such as TensorFlow and the Open Neural Network Exchange (ONNX). Each module integrates the Hailo-8 processor to accelerate a range of deep learning applications, enabling designs such as fan-less AI edge boxes that connect numerous sensors to a single intelligent processing device at the edge. The modules allow the processor to deliver 26 tera-operations per second (TOPS) at a power efficiency of 3 TOPS per watt.
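Those two figures imply a rough power budget for the module. As a back-of-the-envelope sketch (not a vendor specification; real power draw varies with workload), dividing the quoted throughput by the quoted efficiency gives the implied power consumption:

```python
# Back-of-the-envelope check of the performance figures quoted in the article.
throughput_tops = 26.0       # tera-operations per second
efficiency_tops_per_w = 3.0  # TOPS per watt

# Implied power draw = throughput / efficiency
power_w = throughput_tops / efficiency_tops_per_w
print(f"Implied power draw: {power_w:.1f} W")
```

A draw in the neighborhood of 8 to 9 watts is consistent with the fan-less, passively cooled edge-box designs the company describes.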
Hailo’s CEO, Orr Danon, says the new modules will prove to be a game-changer for devices at the edge, empowering companies worldwide to create more cost-efficient, effective, and innovative AI-based products while staying within their systems’ thermal constraints.