Computer network security is one of today’s most prominent technological challenges. As we look toward a future shaped by Artificial Intelligence (AI), there is a need for a cutting-edge computing platform capable of handling the complex challenges ahead.
Arm has introduced Armv9 to address the need for increasingly capable security and artificial intelligence. Armv9 offers three main advantages over the previous architecture: security, improved AI capability, and faster overall performance. Its microarchitecture will support artificial intelligence and machine learning while also adding security features. The ubiquity and diversity of AI workloads demand more advanced and varied solutions. The Arm Confidential Compute Architecture (CCA) is a new technology introduced in Armv9. CCA introduces Realms: separate, containerized execution environments that are shielded from the operating system and hypervisor. Using Realms, commercially sensitive data and code can be protected from the rest of the device while in use, at rest, and in transit.
The first aspects of the Armv9 architecture were revealed earlier this week, with more to come later this year. From an architectural standpoint, v9 isn’t likely to be as significant a leap forward as v8 was over v7, when the company first introduced 64-bit support with the AArch64 instruction set, along with a revamped execution model.
The first of Armv9’s three significant changes is security. The new Arm Confidential Compute Architecture (CCA) aims to safeguard sensitive data in a hardware-protected environment. These so-called “Realms” can be created dynamically to shield critical data and code from the rest of the system.
Then there is AI processing. Armv9 adds Scalable Vector Extension 2 (SVE2), an instruction set extension intended to accelerate machine learning and digital signal processing workloads. Everything from 5G systems to virtual and augmented reality, as well as machine learning tasks such as image processing and speech recognition, should benefit. Over the coming years, Arm plans to extend these AI capabilities beyond the CPU to its Mali GPUs and Ethos NPUs. These AI ambitions are said to be a key reason Nvidia is in the process of acquiring Arm for $40 billion.
Scalable Vector Extensions, or SVE, were first announced in 2016. Fujitsu was the first licensee to put them to use, incorporating them into its A64FX CPU, which powers the world’s fastest (for now) supercomputer, Fugaku, in Japan. The first version of SVE lacked a number of SIMD (single instruction, multiple data) instructions. SVE1 was well suited to high-performance computing, but it had little impact on non-HPC workloads. SVE2 aims to provide a more complete set of scalable SIMD instructions for DSP and machine learning (ML) workloads.
Beyond these headline features, Armv9 promises broader performance improvements. Arm expects CPU performance to improve by more than 30% over the next two generations, with additional gains coming from software and hardware optimizations, not only in mobile CPUs but also in server processors such as AWS Graviton and Ampere’s Altra.