AWS Launches Graviton3 Processors For Machine Learning Workloads

Amazon Web Services (AWS) has announced the third generation of its custom Graviton processors. The new AWS Graviton3 chips will power all-new Amazon Elastic Compute Cloud (EC2) C7g instances, currently available in preview, arriving three years after the original Graviton processors were released.

Unveiled at the AWS re:Invent 2021 conference in Las Vegas, the new Graviton3-powered instances deliver up to 25% faster compute performance and up to 2x higher floating-point performance than the current generation of Graviton2-powered EC2 C6g instances, according to AWS. The company also says the new instances are up to 2x faster than their Graviton2 counterparts on cryptographic workloads.

AWS also says the new Graviton3-powered instances deliver up to 3x better performance for machine learning workloads than Graviton2-powered instances, including support for the bfloat16 data format.
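The appeal of bfloat16 for machine learning is that it keeps float32's full exponent range while truncating the mantissa, halving memory and bandwidth at the cost of precision. A minimal Python sketch of the conversion (using bit truncation, which is how bfloat16 is commonly derived from float32):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping only the top 16 bits.

    bfloat16 retains float32's sign bit and all 8 exponent bits but
    keeps just 7 mantissa bits, trading precision for dynamic range.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16  # drop the low 16 mantissa bits

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 bits back to a float32 value by zero-padding."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# Round-tripping loses precision but preserves magnitude:
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
# approx is 3.140625 -- close enough for gradient arithmetic in training
```

Hardware bfloat16 support, as in Graviton3, performs this arithmetic natively instead of emulating it in software.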

The AWS Graviton chips are Arm-based 7nm processors custom-built for cloud workloads by Annapurna Labs, an Israeli chip-design startup that AWS acquired roughly six years ago. The AWS Graviton2 processors were released in late 2019, a year after the original Graviton chips. Each vCPU in a Graviton processor has its own dedicated cores and caches. AWS customers can currently choose from around 12 different Graviton2-powered instance types.

The new offerings will better serve customers who need to run compute-intensive workloads such as HPC, batch processing, electronic design automation (EDA), media encoding, scientific modeling, ad serving, distributed analytics, and CPU-based machine learning inferencing, according to AWS’ chief evangelist.

Alongside Graviton3, Amazon also announced Trn1, a new instance for training deep learning models in the cloud, including models for image recognition, natural language processing, fraud detection, and forecasting. It runs on Trainium, an Amazon-designed processor that the firm said last year would deliver the most teraflops of any cloud machine learning instance. (A teraflop is a unit of measurement for a chip’s ability to perform one trillion floating-point operations per second.)

Graviton3

Graviton3 is up to 25% faster for general-compute tasks, with 2x faster floating-point performance for scientific workloads, 2x faster performance for cryptographic workloads, and 3x faster performance for machine learning workloads, according to AWS CEO Adam Selipsky. Furthermore, Selipsky said, Graviton3 uses up to 60% less energy than the previous generation for the same performance.

A new pointer authentication feature in Graviton3 aims to improve overall security. Before a return address is pushed onto the stack, it is signed with a secret key and additional context information, including the current value of the stack pointer. After the signed address is popped off the stack, it is verified before being used. If the address is invalid, an exception is raised, blocking attacks that work by overwriting stack contents with the address of malicious code.
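The sign-then-verify flow can be sketched in Python using an HMAC as a stand-in for the hardware's keyed authentication code. This is purely illustrative: the key name, tag size, and context layout here are assumptions, and real Arm pointer authentication packs a short code into unused pointer bits using keys held in CPU registers.

```python
import hmac
import hashlib

SECRET_KEY = b"per-process secret"  # hypothetical; real PAC keys live in CPU registers

def sign_address(addr: int, stack_ptr: int) -> bytes:
    """Compute a short authentication code over the return address plus
    context (here, the stack pointer), mimicking pointer signing."""
    msg = addr.to_bytes(8, "little") + stack_ptr.to_bytes(8, "little")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()[:2]

def verify_address(addr: int, stack_ptr: int, tag: bytes) -> int:
    """Re-derive the code and compare; raise if the address was tampered with."""
    if not hmac.compare_digest(tag, sign_address(addr, stack_ptr)):
        raise RuntimeError("pointer authentication failed")
    return addr

# On function entry: sign the return address before storing it.
addr, sp = 0x40001234, 0x7FFF0000
tag = sign_address(addr, sp)

# On return: verify before jumping. An attacker who overwrites the stored
# address cannot forge a matching tag without the secret key.
verified = verify_address(addr, sp, tag)
```

Including the stack pointer in the signed context means a signed address cannot even be replayed from a different stack frame.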

Graviton3 processors, like previous generations, provide dedicated cores and caches for each vCPU, along with cloud-based security capabilities. C7g instances will come in various sizes, including bare metal, and Amazon claims they will be the first in the cloud industry to feature DDR5 memory, with up to 30 Gbps of network bandwidth and Elastic Fabric Adapter (EFA) support.

Trn1

Trn1, Amazon’s machine learning training instance, provides up to 800 Gbps of networking bandwidth, according to Selipsky, making it well suited for large-scale, multi-node distributed training. Customers can train models with billions of parameters using clusters of up to tens of thousands of Trn1 instances.
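The reason networking bandwidth matters for multi-node training is that data-parallel training must average gradients across all nodes after every step. A toy single-process sketch of that averaging ("all-reduce") step, with plain Python lists standing in for tensors and real collective-communication libraries:

```python
def all_reduce_mean(per_node_grads):
    """Average corresponding gradient entries across all nodes.

    per_node_grads: one gradient list per node, all the same length.
    In real distributed training this averaging is what crosses the
    network, which is why interconnect bandwidth bounds scaling.
    """
    n = len(per_node_grads)
    return [sum(g[i] for g in per_node_grads) / n
            for i in range(len(per_node_grads[0]))]

# Two "nodes", each holding gradients for the same two parameters
node_grads = [[0.2, -0.4], [0.4, 0.0]]
avg = all_reduce_mean(node_grads)  # averages to roughly [0.3, -0.2]

# Every node then applies the identical averaged update:
lr = 0.1
weights = [1.0, 1.0]
weights = [w - lr * g for w, g in zip(weights, avg)]
```

Per step, every node exchanges gradients proportional to the full model size, so billions of parameters translate directly into gigabits on the wire.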

Trn1 uses the same Neuron SDK as Inferentia, the company’s cloud-hosted chip for machine learning inference, and supports popular frameworks such as Google’s TensorFlow, Facebook’s PyTorch, and Apache MXNet. Compared to standard AWS GPU instances, Amazon claims a 30% increase in throughput and a 45% reduction in cost per inference.

C7g and the new I-family are the two instance families announced with Graviton3 this week to help AWS customers improve the performance, cost, and energy efficiency of workloads running on Amazon EC2.

Companies are increasingly turning to AI for efficiency gains as they confront economic challenges such as labor shortages and supply chain disruptions. According to a recent Algorithmia poll, 50% of businesses planned to increase their spending on AI and machine learning in 2021, with 20% saying they would “substantially” increase it. AI adoption is driving cloud growth, a trend Amazon is clearly aware of, as shown by its continuing investments in technologies such as Graviton3 and Trn1.

References:

  • https://aws.amazon.com/blogs/aws/join-the-preview-amazon-ec2-c7g-instances-powered-by-new-aws-graviton3-processors/
  • https://venturebeat.com/2021/11/30/amazon-announces-graviton3-processors-for-ai-inferencing/
  • https://www.nextplatform.com/2021/12/02/aws-goes-wide-and-deep-with-graviton3-server-chip/
  • https://www.hpcwire.com/2021/12/01/aws-arm-based-graviton3-instances-now-in-preview/
  • https://www.infoq.com/news/2021/12/amazon-ec2-graviton3-arm/
  • https://www.theregister.com/2021/12/06/graviton_3_aws/