Meta strikes deal to purchase millions of Amazon AI CPUs in latest chip shift
Meta signs a deal to acquire millions of Amazon AI CPUs, highlighting rising demand for AI hardware and shifting strategies in the chip market.
Meta has agreed to use millions of Amazon's AWS Graviton chips to support its expanding artificial intelligence workloads, Amazon announced on Friday. The deal represents another significant win for Amazon's in-house silicon efforts.
The AWS Graviton processor is built on the ARM architecture and functions as a CPU — meaning it is designed for general-purpose computing — rather than a GPU, which is typically optimised for graphics and large-scale model training.
Although GPUs continue to dominate in training large AI models, the growing use of AI agents is driving a shift in computing needs. Once models are trained, these agents generate demanding workloads that include real-time reasoning, coding, search functions, and the orchestration of multi-step processes. According to Amazon, the latest generation of Graviton chips has been engineered to handle these AI-driven workloads efficiently.
This agreement also redirects a portion of Meta’s spending back toward AWS, instead of competing cloud providers such as Google Cloud. In August of last year, Meta signed a six-year agreement valued at $10 billion with Google Cloud, although AWS had long been one of its primary cloud partners alongside Microsoft Azure.
The timing of Amazon’s announcement stood out, as it coincided with the conclusion of the Google Cloud Next conference. This moment appeared to underline the competitive dynamic between the two cloud providers. Google, for its part, also develops its own custom AI chips and revealed updated versions during the event.
Amazon also builds its own AI accelerator chip, known as Trainium. Despite a name that suggests a focus on training, Trainium is used for both training and inference, the phase that follows training in which models actively process inputs and generate outputs.
However, Anthropic recently secured a major commitment involving those chips. Earlier this month, the company behind the Claude AI models agreed to spend $100 billion over 10 years to run workloads on AWS, with a strong emphasis on Trainium. As part of that arrangement, Amazon committed an additional $5 billion investment into Anthropic, bringing its total investment in the company to $13 billion.
The agreement with Meta provides Amazon with an opportunity to highlight a high-profile AI customer using its internally developed CPUs. These processors compete with alternatives such as Nvidia’s Vera CPU, which is also ARM-based and tailored for AI agent workloads. A key distinction is that Nvidia sells its chips and systems broadly to enterprises and cloud providers — including AWS — while AWS offers its chips exclusively through its own cloud platform.
Earlier this month, Amazon CEO Andy Jassy used his annual shareholder letter to critique competitors such as Nvidia and Intel, stating that businesses are increasingly seeking improved price-to-performance efficiency for AI infrastructure. He emphasised Amazon’s intention to compete aggressively on that front. The strategy also places significant expectations on Amazon’s internal chip development teams, which continue to play a central role in the company’s broader AI ambitions.