Amazon.com (AMZN.US) CEO Andy Jassy detailed the company's chip business, the expected returns from its $200 billion capital expenditure plan for 2026, and other topics in his 2025 letter to shareholders. In the letter, released on Thursday, Jassy stated, "Our plan to invest approximately $200 billion in capex in 2026 is not based on a hunch. Recent commitments, such as OpenAI's investment of over $100 billion, are one example, alongside several customer agreements that are either finalized (but not yet public) or in advanced negotiations. We anticipate that the majority of the 2026 AWS capital expenditure will be monetized in 2027 and 2028, with a significant portion already backed by customer commitments."
During the company's Q4 earnings call in February, Jassy indicated that Amazon's total planned capital expenditure of around $200 billion would be primarily allocated to AWS. He added that the company is facing exceptionally high demand from customers wanting AWS to handle their core and AI workloads, stating, "We are deploying compute capacity and monetizing it as fast as we can."
Jassy noted in the letter that the annual revenue run rate for the company's chip business—encompassing Graviton, Trainium, and Nitro—has now surpassed $20 billion, with triple-digit year-over-year growth. He further elaborated, "This run rate is actually understated compared to other chip companies because we currently only monetize our chips through Elastic Compute Cloud (EC2). If our chip business were a standalone company and sold this year's chip production to AWS and other third-party customers like leading chip companies do, the annual run rate would be approximately $50 billion. Market demand for our chips is very strong, and it is highly probable that we will eventually sell complete racks of chips directly to third parties."
Jassy also reported that, entering the third year of the AI wave, AWS's AI business achieved an annual revenue run rate exceeding $15 billion in Q1 2026 (nearly 260 times the size AWS itself had reached at the same point in its own history) and is growing rapidly. Furthermore, he pointed out that at scale, Amazon expects its Trainium AI chips to save tens of billions of dollars in annual capital expenditures and provide a several-hundred-basis-point advantage in operating margin for inference tasks compared to relying on other vendors' chips. AI inference refers to the process of running a trained AI model to generate predictions or conclusions from new, unseen data.
Jassy mentioned that Trainium3 chips began shipping in early 2026, offering 30% to 40% better price-performance than Trainium2 and are already almost entirely pre-sold. "While Trainium4 is still about 18 months away from volume availability, a substantial portion has already been pre-booked. Additionally, the majority of inference workloads for Amazon Bedrock, AWS's primary and rapidly growing inference service, run on Trainium. Demand for Trainium is experiencing explosive growth," Jassy said.
Jassy also discussed the company's relationship and competitive dynamics with NVIDIA (NVDA.US). "We maintain a close partnership with NVIDIA; there will always be customers who choose to run on the NVIDIA platform, and we will continue to make AWS the best platform for running NVIDIA chips. However, customers are seeking better price-performance," he stated. He highlighted that the second-generation, in-house AI chip, Trainium2, offers approximately 30% better price-performance than comparable GPUs and is now essentially sold out.
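Taken together, the two quoted figures compound: Trainium2 at roughly 30% better price-performance than comparable GPUs, and Trainium3 at 30% to 40% better than Trainium2. A minimal back-of-the-envelope sketch, assuming the two ratios multiply directly (an illustrative simplification, not a figure from the letter):

```python
# Illustrative arithmetic only; the individual ratios come from the letter,
# but the combined figure is a simple multiplicative assumption, not Amazon's claim.

trainium2_vs_gpu = 1.30                                  # ~30% better than comparable GPUs
trainium3_vs_t2_low, trainium3_vs_t2_high = 1.30, 1.40   # 30-40% gain over Trainium2

# Compounding the two generational gains:
implied_low = trainium2_vs_gpu * trainium3_vs_t2_low     # ~1.69x
implied_high = trainium2_vs_gpu * trainium3_vs_t2_high   # ~1.82x

print(f"Implied Trainium3 vs comparable GPUs: "
      f"{implied_low:.2f}x to {implied_high:.2f}x price-performance")
```

On this reading, Trainium3 would land somewhere around 1.7x to 1.8x the price-performance of the GPUs Trainium2 was benchmarked against, which helps explain why both generations are described as effectively sold out.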
Jassy revealed that AWS added 3.9 gigawatts (GW) of power capacity in 2025 and expects to double its total power capacity by the end of 2027, with new capacity being monetized rapidly once operational. He added that in Q4 2025, AWS grew 24% year over year, reaching an annual revenue run rate of $142 billion. "This is substantial absolute growth. However, we still face capacity constraints, leaving some demand unmet. Incidentally, two large AWS customers have inquired about purchasing our entire 2026 Graviton instance capacity (Graviton is our widely adopted, in-house CPU chip). Given demand from other customers, we could not accommodate these requests, but it gives you a sense of the demand level," Jassy remarked.
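The "substantial absolute growth" claim can be made concrete from the two quoted numbers. A short sketch of the implied arithmetic (derived from the figures above, not stated in the letter):

```python
# A $142B annual run rate growing 24% year over year implies the
# prior-year run rate and the absolute dollar increase below.

current_run_rate = 142.0   # $B, Q4 2025 annualized (from the article)
yoy_growth = 0.24          # 24% year-over-year growth

prior_run_rate = current_run_rate / (1 + yoy_growth)
absolute_growth = current_run_rate - prior_run_rate

print(f"Implied prior-year run rate: ~${prior_run_rate:.1f}B")
print(f"Implied absolute growth:     ~${absolute_growth:.1f}B")
```

That is roughly $27 billion to $28 billion of incremental annualized revenue in a single year, even before the unmet demand Jassy describes.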