Future economic growth could be significantly shaped by AI chips, which will inevitably find their way into robotics, smart homes full of increasingly intelligent devices, increasingly autonomous cars, and many other technologies.
As the name suggests, AI chips are a new generation of microprocessors designed specifically to handle artificial intelligence workloads faster and more efficiently.
The companies listed below, in no particular order, are those we believe to be the leading developers of AI chips: each has demonstrated its technology and has either put it into production or is very close to doing so.
Top AI Chip Companies
Google’s parent corporation develops artificial intelligence technology across several sectors, including cloud computing, data centers, mobile devices, and desktop computers.
Its most notable product is probably the Tensor Processing Unit (TPU), an ASIC created specifically for Google’s TensorFlow programming framework, which is used mainly for machine learning and deep learning, two fields of AI.
The Edge TPU is designed for “edge” devices, meaning devices at the edge of a network, such as smartphones, tablets, and equipment used by the rest of us outside data centers. In contrast to Google’s Cloud TPU, which is a data center and cloud solution, the Edge TPU is tiny: its development board is roughly the size of a credit card, and the chip itself is smaller than a one-cent coin.
Apple has been creating its own semiconductors for some time and may someday quit relying on vendors like Intel, which would mark a significant shift in focus. Apple appears keen to forge its own path in the field of artificial intelligence, having previously essentially severed ties with Qualcomm following a protracted court battle.
The company’s most recent iPhones and iPads feature the A11 and A12 “Bionic” chips. The A12 Bionic is claimed to use 50% less power while being 15% faster than its predecessor. It incorporates Apple’s Neural Engine, a part of the circuitry that is not accessible to apps from other developers.
All of the top technological companies, including Apple, use semiconductor designs created by Arm, or ARM Holdings.
It has an advantage over competitors because it is a semiconductor designer rather than a chip manufacturer, similar to how Microsoft benefited from not building its own computers. In other words, Arm has a lot of market sway.
The company is currently developing its AI chip designs under Project Trillium, a new family of scalable, “ultra-efficient” processors aimed at machine learning applications. Trillium includes the Machine Learning Processor, whose purpose goes without saying, and Arm NN, software designed to work with TensorFlow, Caffe (a deep learning framework), and other frameworks.
As early as 2017, it was claimed that the largest chipmaker in the world was bringing in $1 billion from the sale of AI chips. Intel isn’t the biggest chipmaker in the world right now, but it definitely was back then.
And the processors under consideration in that report were from the Xeon family, which is not explicitly designed for AI but is a line of general-purpose processors that has been enhanced for such workloads. In addition to improving Xeon, Intel has created a line of AI chips named “Nervana,” which it describes as “neural network processors.”
Nvidia appears to lead the market for GPUs, which, as we previously said, can execute AI workloads faster than general-purpose processors. The business seems to have earned a similar edge in the developing market for AI chips.
The two technologies appear to be tightly tied, with Nvidia’s improvements in GPU technology hastening the development of its AI processor. In fact, Nvidia’s AI products appear to be supported by GPUs, and its chipsets could be considered AI accelerators.
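To illustrate why GPU-style hardware suits AI workloads, consider that most neural-network math reduces to matrix products whose rows can be computed independently. The sketch below is a toy illustration of that data parallelism in plain Python, not a depiction of Nvidia’s actual technology; all names and numbers are invented.

```python
# Toy illustration (not Nvidia's real technology): the core of most
# neural-network workloads is matrix math whose rows can be computed
# independently, which is why massively parallel chips like GPUs excel at it.
from concurrent.futures import ThreadPoolExecutor

def row_dot(row, vec):
    # One independent unit of work: a single row of the matrix-vector product.
    return sum(a * b for a, b in zip(row, vec))

def parallel_matvec(matrix, vec, workers=4):
    # Each "core" handles some rows, mimicking how a GPU spreads the
    # same operation across thousands of small cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: row_dot(row, vec), matrix))

weights = [[1, 2], [3, 4], [5, 6]]   # hypothetical layer weights
x = [10, 1]
print(parallel_matvec(weights, x))   # [12, 34, 56]
```

In a real accelerator the parallel units are hardware lanes rather than threads, but the principle is the same: the work splits into many identical, independent pieces.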
Nvidia offers various AI chip technologies to the market, including the Tesla chipset, Volta, and Xavier. All of these GPU-based chipsets are bundled into software-plus-hardware packages that are targeted toward particular needs.
Similar to Nvidia, AMD is a chipmaker closely linked to graphics cards and GPUs, in part because of the expansion of the computer games industry over the past few decades and, more recently, the growth of bitcoin mining.
For machine learning and deep learning, AMD provides combined hardware-and-software solutions such as EPYC CPUs and Radeon Instinct GPUs. EPYC is AMD’s server processor, used primarily in data centers, while Radeon is a graphics processor aimed mainly at gamers. Other AMD chips include the Ryzen and the probably better-known Athlon.
Thanks to its general use as an internet search engine, Baidu is the counterpart of Google in China. Baidu has also entered new and exciting commercial sectors, such as driverless automobiles, which require solid microprocessors and AI chips. To that end, Baidu announced the Kunlun last year, calling it a “cloud-to-edge AI chip.”
After listing seven long-established organizations whose primary activities are not actually aimed at building AI chips, we come to Graphcore, a new business whose main goal is to design AI chips and bring them to market.
Its primary product at the moment looks to be the Rackscale IPU-Pod, which is based on the company’s Colossus processor and targeted at data centers. However, it may well develop more products, given the substantial sums invested in it, because that is where its future lies. The company has convinced BMW, Microsoft, and other well-known brands to invest a combined $300 million in the enterprise, which is now valued at over $2 billion.
In this usage, the term “IPU” stands for intelligence processing unit.
Apple has been a significant source of revenue for Qualcomm since the start of the smartphone boom, so the tech giant’s decision to stop buying its processors must have left Qualcomm feeling abandoned. However, Qualcomm is by no means an obscure player in its industry and has made some substantial investments with the future in mind.
Analysts said Qualcomm entered the market for AI chips somewhat late. Still, the business has a wealth of knowledge regarding the mobile market, which would be beneficial in reaching Qualcomm’s stated goal of “making on-device AI ubiquitous.”
The Parallella, sometimes referred to as the cheapest supercomputing machine available, makes this company one of the more interesting ones on this list. Adapteva’s primary AI chip product is the Epiphany, a 1,024-core 64-bit microprocessor marketed as a “world first.”
The DARPA-funded business has gathered more than $10 million in total funding and ran a successful Kickstarter campaign for its Parallella product.
After raising more than $40 million in funding, Mythic intends to take its “AI Without Borders” philosophy to the world, beginning with data centers. Its system performs mixed digital and analog computation inside flash memory arrays, which the business claims is a “completely new methodology,” one that frees deep neural networks from the constraints of conventional local AI hardware.
Thanks to its small size and desktop-GPU-class speed, the chip can perform “huge parallel processing” in a package that weighs almost nothing.
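Mythic’s actual hardware is proprietary, but the compute-in-memory idea can be sketched conceptually: weights sit in a flash array as conductances, inputs arrive as voltages, and each output is read as the summed current on a column, so physics performs the multiply-accumulate. The plain-Python simulation below only illustrates that principle, with made-up numbers.

```python
# Conceptual sketch of analog compute-in-memory (NOT Mythic's real design):
# a flash cell at row i, column j holds weight w[i][j] as a conductance G.
# Applying input voltages V to the rows makes each column's bitline carry
# current I_j = sum_i(G[i][j] * V[i]) -- a multiply-accumulate performed
# by physics rather than by digital logic.

def analog_matvec(conductances, voltages):
    """Simulate the bitline currents of a flash-array crossbar."""
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for i, v in enumerate(voltages):   # voltage applied to row i
        for j in range(n_cols):        # currents sum on each column
            currents[j] += conductances[i][j] * v
    return currents

# Hypothetical 3x2 conductance array and input vector:
G = [[0.5, 0.25],
     [0.25, 0.5],
     [0.5, 0.25]]
V = [1.0, 2.0, 4.0]
print(analog_matvec(G, V))  # [3.0, 2.25]
```

The appeal of doing this in analog is that the entire matrix-vector product happens in one step inside the memory itself, with no data shuttled between separate memory and compute units.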
Samsung, which has surpassed Apple as the leading smartphone manufacturer and Intel as the largest chipmaker in the world, is attempting to break into hitherto untapped sectors.
The most recent iteration of Samsung’s Exynos microprocessor, which is made for long-term evolution, or LTE, communications networks, was released before the end of last year. According to Samsung, the new Exynos has enhanced on-device neural processing capabilities.
Though it has long been one of Apple’s primary semiconductor suppliers, TSMC is not exactly a boastful business. Although it has a website and reports its results to investors, it says little about its actual work.
Fortunately, news outlets like DigiTimes stay informed of events at the chipmaker and recently revealed that e-commerce behemoth Alibaba has hired TSMC and Global Unichip to construct an AI chip.
This is Huawei’s semiconductor division. Huawei, a telecom equipment maker, is currently the target of various indirect trade embargoes: it is no longer allowed to conduct business in the US, and some European nations are now following the US’s lead.
In any case, HiSilicon’s AI chip technology is likely in its infancy. The company will need to step up its efforts if it hopes to counteract the rising number of supply restrictions that Huawei is subject to.
Without at least one mention of IBM, no list of this kind would be complete. As you might expect, IBM has immensely well-funded research and development efforts across many technologies connected to AI. Although the company’s much-discussed Watson AI uses regular processors rather than AI-specific ones, those processors are nevertheless solid.
IBM’s TrueNorth likely belongs in the category of specialist AI chips. The 5.4 billion transistors in TrueNorth, described as a “neuromorphic chip” modeled on the human brain, may seem like a lot until you realize that AMD’s Epyc has 19.2 billion.
When it comes to transistor count, Xilinx makes the microprocessors with the most parts: it claims its Versal, or Everest, chipsets contain 50 billion transistors. Xilinx indeed refers to Versal as an AI inference platform. “Inference” describes the conclusions drawn from the enormous volumes of data that machine learning and deep learning systems take in and process.
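To make “inference” concrete, here is a minimal sketch: a model whose weights were already learned during training is applied to new inputs to draw a conclusion, in this case a class label. The weights, labels, and features below are invented purely for illustration.

```python
# Minimal illustration of "inference": applying already-trained weights
# to new input data and drawing a conclusion (here, picking a class label).
# The weights are invented for the example -- training is not shown.

LABELS = ["cat", "dog"]
WEIGHTS = [[2.0, -1.0, 0.5],   # score contribution toward "cat"
           [-1.0, 2.0, 0.5]]   # score contribution toward "dog"

def infer(features):
    # One score per label: a dot product of weights with input features.
    scores = [sum(w * x for w, x in zip(row, features)) for row in WEIGHTS]
    # The "conclusion" is the label with the highest score.
    return LABELS[scores.index(max(scores))]

print(infer([1.0, 0.0, 0.2]))  # cat
print(infer([0.0, 1.0, 0.2]))  # dog
```

An inference platform like Versal is, in essence, hardware specialized to run enormous numbers of these weighted-sum-and-decide operations at once.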
The complete Versal and Everest solutions include chips designed, or at least produced, by other firms. But Xilinx is likely among the first to bring such high-power computing capability to market in standalone packages.
Despite not providing an AI chip per se, Via does deliver what it calls an “Edge AI Developer Kit,” which includes a Qualcomm processor and several other parts. Additionally, it allows us to mention a different kind of business.
It is probably only a matter of time before AI comes to all the other makers of low-cost, small computers, including Arduino and Raspberry Pi. A few already contain an AI chip; one such maker, according to Geek, is Pine64.
LG, one of the biggest consumer electronics manufacturers, is a titan that nonetheless appears nimble. Its interest in robotics proves this, and, like many businesses, it is preparing for the day when smart homes call for more intelligent machinery.
This website previously revealed that LG has unveiled the LG Neural Engine, a proprietary AI chip. According to the business, this move is part of a strategy to “accelerate the development of AI gadgets for the home.” However, it’s possible that LG would use the chips in its data centers and backroom systems even before they reach the edge devices.
Virtual and augmented reality require more computing power than almost any other application. A few years back, during the worldwide craze for the augmented reality game Pokémon Go, some of Google’s data center servers allegedly ground to a halt.
Integrating AI chips into both the data center and the edge device is therefore all but essential for VR and AR, and Imagination goes some way toward this with its PowerVR GPU.
With more than $200 million in funding, this business is well equipped to create unique AI chips for its clients. SambaNova claims that it is developing hardware-plus-software solutions to “drive the next generation of AI computing” even though the firm is still in its early stages.
Alphabet, or Google, is one of the principal investors in the business. You’ll see that many large, well-known corporations are investing in cutting-edge new startups to prevent them from being disrupted.
This startup, which is reportedly operating quietly, was founded by a few former Google employees, including one or two who worked on the Tensor project.
Last year, Crunchbase revealed that the business had received $60 million to advance its ideas, predicated on the assertion that the next “breakthrough in computation will be fueled by new, streamlined architectural approach to hardware and software,” as the startup puts it.
Robotics and Automation News has previously featured this business, and our YouTube channel features an interview with one of its senior executives.
Kalray is a well-funded European company that appears to have developed a cutting-edge processor for AI workloads in data centers and on edge devices. According to the business, its approach enables several neural network layers to be computed simultaneously while consuming little power.
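Computing several network layers simultaneously is a form of pipeline parallelism: while one layer processes a sample, the previous layer can already start on the next one. The sketch below simulates this generically in plain Python with one worker thread per layer; it is not a model of Kalray’s actual architecture, and the toy layers are invented.

```python
# Generic sketch of pipeline parallelism across neural-network layers
# (not Kalray's actual architecture): each layer runs as its own worker,
# so layer 2 can process sample 1 while layer 1 is already on sample 2.
import queue
import threading

def layer_worker(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut this stage down
            outbox.put(None)
            return
        outbox.put(fn(item))

def pipeline(stages, samples):
    # Wire the stages together with queues and feed samples through.
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    threads = [threading.Thread(target=layer_worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for s in samples:
        qs[0].put(s)
    qs[0].put(None)               # end-of-stream marker
    results = []
    while (out := qs[-1].get()) is not None:
        results.append(out)
    for t in threads:
        t.join()
    return results

# Two toy "layers": a weighted sum, then a ReLU-style activation.
layer1 = lambda x: 2 * x - 3
layer2 = lambda x: max(0, x)
print(pipeline([layer1, layer2], [0, 1, 2, 3]))  # [0, 0, 1, 3]
```

On dedicated silicon the stages would be separate compute clusters rather than threads, but the throughput benefit is the same: every layer stays busy at once instead of waiting its turn.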
Given that Amazon essentially founded the cloud computing market with its Amazon Web Services unit, and especially given that such chips could boost the efficiency of Amazon’s data centers, it makes sense for the company to enter the AI chip market.
Towards the end of last year, the world’s biggest online retailer announced its AWS Inferentia AI processor. Even after its official launch, it probably won’t be sold to other firms; instead, it will only be made available to businesses within the Amazon group.
Cerebras Systems was established in 2015. In April 2021, the business unveiled the Cerebras WSE-2, an AI chip with 850,000 cores and 2.6 trillion transistors. The WSE-2 comfortably outperforms the WSE-1, which has 400,000 cores and 1.2 trillion transistors.
Because the WSE-1 works so well at speeding up genetic and genomic research, Cerebras’s technology is used by numerous pharmaceutical companies, including AstraZeneca and GlaxoSmithKline.
Hailo, an Israel-based chipmaker focused on AI, has created a specialized artificial intelligence processor that gives edge devices the performance of a data-center-class computer. Hailo’s AI processor rethinks conventional computer architecture, enabling smart devices to perform complex deep learning tasks such as object detection and segmentation in real time with minimal power, space, and cost. The deep learning processor is designed to fit into a wide variety of intelligent machines and devices, affecting industries including automotive, Industry 4.0, smart cities, smart homes, and retail. It is available in the Hailo-8 M.2 and Mini PCIe high-performance AI acceleration modules.
Anari AI is rebuilding the AI hardware sector from the ground up by offering a fresh approach to how AI chips are designed and used. It pioneers reconfigurable AI, enabling users to construct and deploy their own solutions quickly and tailor their infrastructure with a single click. ThorX, the first processor on the Anari platform, is claimed to offer 100x higher computing efficiency than a GPU on 3D and graph data structures.
Prathamesh Ingle is a Mechanical Engineer and works as a Data Analyst. He is also an AI practitioner and certified Data Scientist with an interest in applications of AI. He is enthusiastic about exploring new technologies and advancements and their real-life applications.