Anthropic has signed a new agreement with Amazon to expand its use of Amazon Web Services (AWS), securing up to 5 gigawatts (GW) of computing capacity to train and run its Claude AI models. The deal includes a long-term infrastructure commitment of over $100 billion, along with deeper integration of Claude within AWS as demand for AI services continues to grow.
As part of the agreement, Amazon is investing $5 billion in Anthropic, with plans to invest up to an additional $20 billion over time. This builds on the $8 billion it has already committed, significantly increasing its total investment in the company.
The partnership, which began in 2023, has developed into a large-scale infrastructure collaboration. More than 100,000 customers already use Claude through Amazon Bedrock, and both companies have built a major compute cluster under Project Rainier using over a million Trainium chips.
The latest agreement expands both scale and integration. Anthropic is moving beyond using AWS only as a hosting platform and is embedding the Claude platform directly within AWS. This allows enterprises to access it through their existing accounts, controls, and billing systems, reducing operational complexity.
On the infrastructure side, Amazon's custom chips are becoming central to Anthropic's strategy. The agreement includes multiple generations of Trainium chips along with Graviton processors. New Trainium2 capacity is expected in the near term, with larger-scale Trainium3 capacity planned later, and close to 1 gigawatt of combined capacity targeted by the end of 2026.
This shift also reflects a focus on cost and efficiency. Custom silicon reduces reliance on traditional GPU suppliers while providing more predictable and scalable computing resources. The expansion comes as demand for Claude continues to increase. Anthropic says its revenue run rate has crossed $30 billion in 2026, up from around $9 billion at the end of 2025. At the same time, higher usage has begun to affect performance, particularly during peak periods.
The additional capacity is expected to address these constraints, with more computing resources coming online in the near term and further expansion planned in regions such as Asia and Europe. This reflects a broader trend where AI adoption is becoming global and infrastructure needs to scale accordingly.
Claude also remains available across multiple cloud platforms, including AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry). This allows Anthropic to expand its AWS partnership while still maintaining flexibility across providers.
Andy Jassy, CEO of Amazon, said, “Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand. Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI.”
Dario Amodei, CEO and co-founder of Anthropic, said, “Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand. Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS.”