Akamai Technologies announced a historic $1.8 billion, seven-year cloud infrastructure contract with AI pioneer Anthropic, igniting a 27% jump in its stock, the biggest single-day rally in the company's 28-year history. The deal marks Akamai's decisive pivot from content delivery to AI-focused cloud services amid accelerating demand for high-performance compute.
- Akamai secures largest-ever $1.8B AI cloud contract with Anthropic.
- Cloud infrastructure revenue surges 40% YoY amid CDN decline.
- Deal validates Akamai’s strategic shift towards AI compute services.
Market signal
The $1.8 billion deal with Anthropic represents a major milestone for Akamai, affirming its transition into the AI cloud infrastructure sector. The contract secures substantial committed revenue and positions Akamai as a credible supplier of the cutting-edge GPU compute required for large-scale AI model development and deployment. Its seven-year term also provides long-term revenue visibility uncommon in Akamai's traditional content delivery business.
The signal here is that compute demand from frontier AI companies now outstrips the capacity any single provider can offer. Anthropic's commitment to Akamai underscores the need for diversified infrastructure sources to keep pace with rapid growth in AI workload scale and complexity, while the deployment of over 220,000 NVIDIA GPUs across key contracts points to a technological arms race in cloud AI capability.
Operator impact
Operators and cloud infrastructure buyers should note Akamai's evolving portfolio, which pivots from the diminishing revenue of its legacy CDN business toward expansive AI compute services. With cloud infrastructure revenue growing 40% year over year, Akamai demonstrates that traditional network and content delivery companies can adapt to serve AI workloads with differentiated value propositions.
Akamai's partnership with Anthropic also underscores the value of multi-year commitments in stabilizing revenue amid competitive cloud markets. For operators evaluating providers, Akamai's expanded GPU capacity and expertise in global-scale deployment of AI inference workloads make it an emerging option alongside the major hyperscalers, especially for latency-sensitive AI applications.
What to watch next
The key question is whether Akamai can expand beyond this foundational Anthropic contract to a broader customer base and avoid the concentration risk of relying on a few major clients. Monitoring its pipeline of enterprise cloud customers, especially those with substantial AI compute needs, will be key to assessing the sustainability of its AI cloud revenue growth trajectory.
Additionally, watch how Akamai integrates new GPU technologies and scales its AI Grid infrastructure globally to reduce latency and deliver cost efficiencies. Operator customers should track this progress closely, as it may shape future decisions about sourcing AI infrastructure and the balance between established hyperscalers and emergent specialty providers like Akamai.