
Cloud Wars: How AI ‘Landlords’ Amazon And Microsoft May Absorb OpenAI And Anthropic
Major cloud providers have poured tens of billions of dollars into leading AI developers in recent months, forging partnerships that bind the startups to their infrastructure. These deals, announced in April 2026, highlight a growing reliance on hyperscalers for the compute power needed to train frontier models.[1][2] As AI labs scale amid surging demand, the question is whether Amazon Web Services and Microsoft Azure will evolve from supporters into dominant forces in the ecosystem.
The shift matters now because infrastructure constraints – from chip fabrication to power supply – have turned compute into a scarce resource. Early commitments lock in advantages, potentially tilting power toward whoever controls the servers.[3]
Amazon Doubles Down on Anthropic with Landmark Commitment
Amazon announced on April 20, 2026, an investment of up to $25 billion in Anthropic, bringing its total backing to approximately $33 billion.[1] The structure included an initial $5 billion infusion, with the remainder contingent on commercial milestones reached by the AI firm, which is valued at $380 billion.
In return, Anthropic pledged more than $100 billion in spending on AWS technologies over the next decade. This encompassed commitments to use Amazon’s Trainium chips to train its Claude models and to secure up to 5 gigawatts of capacity.[3] Amazon CEO Andy Jassy noted that Anthropic’s decade-long dedication to AWS Trainium underscored progress in custom silicon development.[1]
Anthropic CEO Dario Amodei emphasized the need to match user demand for Claude, stating the collaboration would advance research while serving over 100,000 AWS builders.[1] The firm had already named AWS its primary cloud provider in 2023 and primary training partner in 2024.
Microsoft’s Evolving Grip on OpenAI
Microsoft’s partnership with OpenAI, which began in 2019 with a $1 billion investment, has grown into a multifaceted alliance involving equity stakes, IP access, and revenue sharing.[2] The company holds rights to integrate OpenAI models into its products and resell them via Azure until 2032.
Recent adjustments in late April 2026 ended cloud exclusivity, allowing OpenAI to deploy models across providers like Amazon and Google Cloud.[4] Microsoft retained non-exclusive IP licenses for non-research products and a revenue share capped at 20% through 2030. OpenAI must still prioritize Azure for initial model shipments.
These changes followed tensions, including OpenAI’s February 2026 deal with Amazon worth up to $50 billion. The revised terms carved out certain research IP from Microsoft’s access, introducing flexibility amid competitive pressures.[2]
Lock-In Through Contracts and Compute
The deals reveal distinct paths to interdependence. Microsoft secured economic interests and deep integration with OpenAI, positioning itself beyond a mere supplier.[2] Amazon, by contrast, focused on infrastructure provision and distribution via Bedrock, without comparable IP transfers from Anthropic.
Technical lock-in proves harder to escape. Training pipelines optimized for specific hardware – like AWS Trainium or Azure’s stack – incur high migration costs, as the sketch below illustrates. Anthropic has diversified somewhat, incorporating Google TPUs and NVIDIA GPUs alongside AWS, yet it remains tied through massive commitments.[2]
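To make that migration cost concrete, here is a minimal sketch in PyTorch, assuming the open-source torch-xla bridge that AWS’s Neuron SDK uses to expose Trainium. The backend names and helper functions are illustrative, not drawn from any lab’s actual codebase; the point is that even a toy training step forks on vendor-specific calls.

```python
import torch
import torch.nn.functional as F

# Hypothetical two-backend training step, illustrating why pipelines
# bind to vendor hardware: NVIDIA GPUs use the CUDA path, while AWS
# Trainium is an XLA device reached through the Neuron SDK's torch-xla.

def get_device(backend: str) -> torch.device:
    if backend == "trainium":
        # Importing torch-xla switches execution to lazy, graph-compiled
        # steps; there is no drop-in CUDA equivalent.
        import torch_xla.core.xla_model as xm
        return xm.xla_device()
    return torch.device("cuda" if backend == "cuda" else "cpu")

def train_step(model, batch, optimizer, backend: str) -> float:
    inputs, targets = batch
    optimizer.zero_grad()
    loss = F.mse_loss(model(inputs), targets)
    loss.backward()
    if backend == "trainium":
        # XLA executes lazily: the optimizer step goes through an xm
        # helper, and mark_step() flushes the accumulated graph.
        import torch_xla.core.xla_model as xm
        xm.optimizer_step(optimizer)
        xm.mark_step()
    else:
        optimizer.step()
    return loss.item()

# Usage sketch: the divergence starts at device placement and deepens
# with every optimization layered on top.
backend = "cpu"  # or "cuda" / "trainium", each with its own toolchain
device = get_device(backend)
model = torch.nn.Linear(8, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batch = (torch.randn(4, 8, device=device), torch.randn(4, 1, device=device))
print(train_step(model, batch, optimizer, backend))
```

Multiply branches like these across data loading, parallelism strategy, and kernel-level tuning, and switching clouds becomes a rewrite rather than a configuration change.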
What matters now: hyperscalers shape how efficiently models run on their platforms, turning what were once mere suppliers into integral components of the AI products themselves.
Supply bottlenecks exacerbate this: data centers take 18-24 months to build, custom chips follow multi-year design cycles, and power demands rival those of nuclear plants.[3]
Amazon’s Bold Move into OpenAI Territory
Capitalizing on OpenAI’s newfound multi-cloud freedom, Amazon integrated GPT-5.4 and the upcoming GPT-5.5 models into Bedrock shortly after the Microsoft revisions.[5] This gives AWS users seamless access to them alongside Anthropic’s and other models, unified under Bedrock’s security framework, as the sketch below shows.
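In practice, that unification shows up in Bedrock’s Converse API, where switching model families is a one-parameter change. Here is a minimal boto3 sketch; the Claude identifier follows Bedrock’s published naming scheme, but the GPT-5.4 identifier is a placeholder assumption, since [5] does not specify the actual ID string.

```python
import boto3

# One client, one call shape, any Bedrock-hosted model family.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    # The Converse API normalizes request and response formats across
    # providers, so only model_id changes between vendors.
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256},
    )
    return response["output"]["message"]["content"][0]["text"]

# A real Anthropic identifier under Bedrock's naming scheme, and a
# hypothetical OpenAI one; the actual GPT-5.4 ID is not public.
print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize Bedrock."))
print(ask("openai.gpt-5-4-v1:0", "Summarize Bedrock."))
```

Because access control, logging, and guardrails hang off the shared client rather than any one model, this is the layer where Bedrock differentiates itself, which is the dynamic the next paragraphs describe.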
AWS also launched Bedrock Managed Agents, which leverage OpenAI models for tool execution and context management, plus productivity tools such as Amazon Quick Desktop.[5] AWS VP Anthony Liguori highlighted how easily customers can adopt the models without new development work.
These steps signal a platform war in which model choice is becoming commoditized while governance and deployment differentiate the leaders.
Future Fault Lines in the AI Ecosystem
As AI revenue surges – Anthropic’s annualized run rate hit $30 billion by April 2026 – frontier labs face existential needs for compute amid three-to-five-year supply lags.[3] Cloud giants, with annual capital expenditures on the scale of Amazon’s $200 billion, hold the keys.
While formal absorption remains speculative, the trajectory favors hyperscalers. OpenAI gains distribution breadth; Anthropic preserves some optionality. Yet embedded dependencies suggest cloud providers will increasingly dictate the pace of innovation. The real stakes lie not in model superiority alone, but in who controls the infrastructure beneath it.[2]