The "Cloud Exit": Why Major Tech Firms are Moving Their Data Back to Physical Servers in 2026

For years, the cloud seemed like the obvious future. Lower overhead, infinite flexibility, someone else’s problem to maintain. Hundreds of billions of dollars poured into AWS, Azure, and Google Cloud, and most enterprise IT teams simply followed the consensus. Now, in 2026, a growing number of those same organizations are quietly reversing course – pulling workloads out of the cloud and moving them back onto physical servers they own or lease directly. This isn’t a fringe movement anymore. It has a name – cloud repatriation – and the numbers behind it are harder to ignore every year.

The Numbers Tell a Surprising Story

The Numbers Tell a Surprising Story (Image Credits: Pixabay)

Recent data reveals that 86% of CIOs plan to move some workloads from public cloud back to private cloud or on-premises infrastructure, marking the highest rate ever recorded. That figure would have seemed implausible five years ago, when “cloud-first” was the dominant mantra across the industry.

Barclays’ CIO survey shows 86% of organizations are moving at least some workloads back to private cloud or on-premises infrastructure – up from just 43% in late 2020. The speed of that shift is striking. What was once a niche technical debate has moved squarely into boardrooms.

According to the Flexera 2025 State of the Cloud Report, around 21% of workloads and data have already been repatriated, and roughly 70 to 80 percent of companies are repatriating at least some data annually, based on IDC research. The trend is not theoretical. It is happening at scale, right now.

Importantly, public cloud spending continues to grow, with Gartner forecasting $723.4 billion in worldwide public cloud expenditure for 2025, up 21% from 2024 – indicating that repatriation is strategic and selective rather than a mass exodus. The two trends coexist, which tells you something important about how infrastructure decisions are actually being made.

When Cloud Bills Stop Making Sense

When Cloud Bills Stop Making Sense (By Intel Free Press, CC BY 2.0)

Cost remains the primary driver behind cloud repatriation decisions. According to recent surveys, 84% of organizations cite managing costs as their biggest cloud challenge. What initially appeared as cost-effective pay-as-you-go pricing often evolves into a financial burden as usage scales.

Approximately 27% of cloud infrastructure spending is wasted on underused resources, according to Flexera’s 2025 report – driven by oversized instances, forgotten development environments, and underutilized capacity. That is a staggering amount of money being burned for nothing. For large organizations, it can easily represent tens of millions of dollars per year.
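To get a sense of scale, the 27% waste figure can be applied to a hypothetical annual cloud budget. The budgets below are illustrative examples, not figures from the report:

```python
# Rough sense-of-scale calculation for the 27% cloud waste estimate.
# WASTE_SHARE comes from the Flexera figure cited above; the sample
# budgets are purely illustrative.
WASTE_SHARE = 0.27

def wasted_spend(annual_cloud_budget: float) -> float:
    """Estimated annual spend lost to idle or oversized resources."""
    return annual_cloud_budget * WASTE_SHARE

for budget in (1e6, 10e6, 100e6):
    print(f"${budget:,.0f} budget -> ${wasted_spend(budget):,.0f} wasted")
```

At a $100 million annual cloud budget, 27% waste works out to roughly $27 million a year, which is consistent with the "tens of millions" claim for large organizations.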

Many organizations discovered that steady, always-on workloads behaved very differently in production than they did in planning spreadsheets. Usage-based pricing worked well for elastic demand, but less so for predictable systems that run 24/7. Over time, the cost of cloud repatriation began to look smaller than the cost of staying put.

More than 43% of IT leaders found that moving applications and data from on-premises to the cloud was more expensive than expected. That gap between promise and reality is precisely what is pushing finance teams and CIOs toward the exit ramp. Predictability, it turns out, has real value.
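The break-even arithmetic behind these decisions is simple to sketch. The monthly figures below are entirely hypothetical placeholders, not vendor pricing:

```python
# Minimal break-even sketch for a steady 24/7 workload: how many months
# until cumulative cloud spend exceeds the hardware purchase plus on-prem
# running costs. All inputs are hypothetical.

def months_to_break_even(cloud_monthly: float,
                         hardware_capex: float,
                         onprem_monthly: float) -> float:
    """Months until owning hardware becomes cheaper than renting cloud."""
    monthly_saving = cloud_monthly - onprem_monthly
    if monthly_saving <= 0:
        return float("inf")  # on-prem never pays off at these rates
    return hardware_capex / monthly_saving

# e.g. a $60k/month cloud bill vs $500k of servers costing $20k/month to run
print(months_to_break_even(60_000, 500_000, 20_000))  # 12.5 months
```

The point of the exercise is the shape of the curve, not the exact numbers: for predictable workloads, capex amortizes while usage-based billing never stops.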

The Hidden Cost of Leaving: Egress Fees

The Hidden Cost of Leaving: Egress Fees (By Victorgrigas, CC BY-SA 3.0)

One of the primary concerns with public cloud environments is the cost of transferring data out, often referred to as data egress fees. This is one of the less-discussed aspects of cloud lock-in, and it catches many organizations off guard when they first consider moving away from a provider.

AWS charges $0.09 per gigabyte for the first 10 terabytes of egress. At petabyte scale, that translates to anywhere between $90,000 and $120,000 just to retrieve your own data. For companies sitting on hundreds of petabytes, those fees become a serious structural barrier to exit.
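As a rough sketch using the flat first-tier rate quoted above (real AWS pricing steps down at higher volume tiers, so treat this as an upper-bound estimate):

```python
# Hypothetical egress-cost estimate at the flat $0.09/GB first-tier rate
# cited in the text. Actual AWS tiers reduce the per-GB price at volume,
# so this is an upper bound, not a quote.

RATE_PER_GB = 0.09       # USD, first-tier internet egress rate
GB_PER_PB = 1_000_000    # decimal petabyte; a binary PB (2**20 GiB) runs ~5% higher

def egress_cost(petabytes: float, rate: float = RATE_PER_GB) -> float:
    """Flat-rate egress estimate in USD for moving `petabytes` out."""
    return petabytes * GB_PER_PB * rate

print(f"1 PB at flat rate: ${egress_cost(1):,.0f}")
print(f"100 PB at flat rate: ${egress_cost(100):,.0f}")
```

One petabyte at the flat rate lands at about $90,000, the low end of the range above; request charges and binary-vs-decimal accounting push real invoices higher.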

Public cloud pricing is designed to be low-friction at the start but scales aggressively. The hidden costs – egress fees and API request charges – are the real killers at volume. These costs rarely appear in early cloud migration business cases. They tend to show up later, buried in invoices that grow month by month.

37signals negotiated a $250,000 egress fee waiver from AWS as a goodwill gesture – proof that the penalty exists and that it is negotiable when the PR stakes are high enough. Most companies, of course, do not have that kind of leverage or visibility. They simply pay the bill.

Real Companies, Real Savings

Real Companies, Real Savings (Image Credits: Pixabay)

Dropbox is one of the most high-profile examples of a company that left the cloud. It moved 90% of its data from the cloud to its own data centers, claiming to save nearly $75 million over two years. The decision was driven by the need for greater control over its infrastructure and the realization that its specific requirements could be better met with on-premises solutions.

37signals moved its entire infrastructure off AWS in 2022. Initially projecting $7 million in savings over five years, the company exceeded that target, reporting roughly $2 million in annual savings, or about $10 million over five years. The company maintains that while cloud is ideal for startups, it becomes genuinely uneconomical for established companies with predictable, heavy workloads.

GEICO migrated more than 600 applications to Azure over a decade. The result was costs running 2.5 times higher than expected, with declining reliability. GEICO is now repatriating at least 50% of its workloads onto an OpenStack-based private cloud, with expected savings of 50% per compute core and 60% per gigabyte of storage.

SEO software firm Ahrefs calculated that running its equivalent hardware and workloads entirely within AWS’ Singapore region over two years would have cost an estimated $440 million, versus the $40 million it actually paid for 850 on-premises servers during that same period. Figures like those are difficult to argue with in a board meeting.

AI Is Reshaping the Calculus

AI Is Reshaping the Calculus (Self-photographed, CC BY-SA 3.0)

Perhaps the most significant driver of cloud repatriation in 2025 and 2026 is artificial intelligence. AI workloads have fundamentally challenged traditional cloud economics, and for organizations running AI workloads consistently, building dedicated AI infrastructure on-premises or in colocation facilities often proves more cost-effective than paying premium public cloud rates. Additionally, hosting AI models privately provides greater control over training data, proprietary algorithms, and intellectual property.

The explosive growth of AI workloads in 2025 and 2026 created a new repatriation driver. Training and inference on GPU clusters in public cloud is expensive, and the data gravity problem – where moving terabytes of training data in and out of cloud regions becomes a bottleneck – pushes organizations toward co-located or on-premises GPU infrastructure.

Organizations using private AI applications are increasingly turning to on-premises infrastructure. These companies are running large language models or small language models on their own hardware, training or fine-tuning those AI models with their own private corporate data. Many are hesitant to share their language models with AI vendors at all.

Gartner predicts that 40% of enterprises will adopt hybrid compute architectures for mission-critical workflows by the end of 2026, up from just 8% in prior years. Much of this growth is driven by AI teams demanding predictable GPU access without cloud spot-instance volatility.

Security, Compliance, and the Sovereignty Problem

Security, Compliance, and the Sovereignty Problem (Image Credits: Unsplash)

In 2026, data is not just an asset – it is a liability if managed incorrectly. Regulations like GDPR and various national data protection laws are forcing companies to know exactly where their data physically sits. Under the US CLOUD Act, data hosted with US-owned hyperscalers – even in their European data centers – can theoretically be accessed by US law enforcement.

Around one third of organizations cite security issues as a motivator for cloud repatriation. Despite the security measures implemented by cloud providers, many businesses feel more secure managing their own infrastructure. This sense of control is particularly crucial for industries dealing with sensitive information, such as finance and healthcare.

Regulatory pressure is no longer a theoretical concern – it is an active enforcement reality that compels organizations to demonstrate infrastructure control, not just contractual assurances. The EU’s Digital Operational Resilience Act (DORA) is now enforceable and targets financial institutions specifically.

A 2025 survey indicates that 97% of mid-market organizations plan to move workloads off public clouds for better data sovereignty. That near-universal sentiment in the mid-market segment shows just how broad the compliance concern has become, well beyond large enterprises.

Performance and Latency: The Technical Case

Performance and Latency: The Technical Case (Image Credits: Unsplash)

Applications requiring real-time processing or low-latency responses – such as financial trading platforms, industrial automation systems, and high-frequency data processing – often perform better on dedicated infrastructure closer to end users. The physics of the internet simply cannot be negotiated away by a service-level agreement.

Latency-sensitive applications may not meet performance expectations in public cloud environments to the same degree they might on-premises or in colocation sites. For some workloads, that performance gap is a minor inconvenience. For others – real-time trading, surgical robotics, autonomous vehicle systems – it is simply not acceptable.

For workloads tied to physical processes – manufacturing systems, trading platforms, logistics coordination – latency variance creates real operational risk. Cloud regions have improved, but distance still matters. In these cases, repatriation is less about saving money and more about protecting service quality and customer experience.

By moving to a dedicated server or private cluster, organizations eliminate the “noisy neighbor” effect. Resources are physically theirs. If another customer on the network comes under attack, the isolated bare metal environment remains stable. For applications where uptime is revenue, that kind of isolation carries significant value.

The Vendor Lock-In Trap

The Vendor Lock-In Trap (Image Credits: Unsplash)

Many organizations discovered that exiting a cloud provider was harder than expected – not because of compute, but because of deeply embedded proprietary services. Databases, analytics engines, AI platforms, and identity services created tight coupling that raised both cost and risk. This is a problem that tends to compound quietly over years, becoming visible only when a company decides it wants to leave.

Some companies fear over-reliance on hyperscalers like AWS, Azure, or Google Cloud and are actively seeking to diversify their infrastructure investments. The logic is straightforward: a supplier with no competition has little incentive to keep prices stable. Having an alternative – even a partial one – restores negotiating leverage.

For many, avoiding cloud vendor lock-in became a design principle rather than a future concern. Cloud repatriation, even for a subset of systems, restored leverage. That renewed leverage often translates into better contract terms and more predictable pricing from the cloud provider – even for workloads that stay in the cloud.

Organizations are moving past the “cloud at all costs” mindset toward infrastructure pragmatism – placing each workload where it runs most efficiently, securely, and cost-effectively. That kind of thinking would have been considered backward-looking in 2018. In 2026, it is simply good engineering.

Hybrid Is the New Default

Hybrid Is the New Default (Image Credits: Unsplash)

The binary choice between cloud and on-premises is officially dead. In 2026, hybrid architecture is becoming the default standard, with Gartner predicting that 40% of enterprises will adopt hybrid compute architectures in mission-critical workflows. The most sophisticated organizations are no longer picking sides – they are building systems that can flex between both environments.

Many companies are adopting a hybrid cloud strategy – keeping their frontend on the public cloud for global distribution while moving their heavy databases and backend processing to dedicated servers to reduce costs. That split makes practical sense. Public cloud excels at handling burst traffic and global reach. Physical servers excel at steady, predictable compute.

Most repatriating organizations retain 40 to 70 percent of their workloads in the cloud, which is an important detail that often gets lost in the narrative. This is not about abandoning cloud services wholesale. It is about being deliberate rather than reflexive.

“Cloud-first” has quietly evolved into “cloud-appropriate.” For CTOs, this shift has made cloud repatriation a legitimate and defensible tool – one that strengthens negotiating power, improves operational clarity, and restores balance to infrastructure strategy. It took about a decade, but the industry is finally having the nuanced conversation it should have had from the beginning.

The Road Back Is Not Always Easy

The Road Back Is Not Always Easy (Image Credits: Unsplash)

Repatriation is not as simple as lifting and shifting workloads back to on-premises infrastructure. Purchasing hardware, networking equipment, and software licensing requires significant capital expenditure. Companies must also budget for power, cooling, and ongoing facility maintenance costs.

IT teams often need to refactor applications to work in on-premises environments, requiring substantial code rewrites and new automation frameworks. Hiring and retaining skilled on-premises operations engineers adds to staffing costs. Repatriation also introduces operational risks, including unexpected outages and performance degradation during the transition.

Repatriation works best for stable workloads. If traffic spikes unpredictably by thousands of percent for brief periods, public cloud may still be the better option. For consistent, predictable traffic – streaming services, databases, SaaS applications – dedicated infrastructure typically wins.
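That placement logic can be sketched as a toy heuristic. The 0.5 variability threshold and the function name are illustrative assumptions, not an industry standard:

```python
# Toy placement heuristic: steady load profiles favor dedicated hardware,
# bursty ones favor cloud elasticity. The 0.5 cutoff on the coefficient
# of variation is an illustrative assumption, not a benchmark.
from statistics import mean, stdev

def suggest_placement(hourly_load: list[float]) -> str:
    """Return 'dedicated' for flat load profiles, 'cloud' for bursty ones."""
    avg = mean(hourly_load)
    if avg == 0:
        return "cloud"
    cv = stdev(hourly_load) / avg  # coefficient of variation
    return "dedicated" if cv < 0.5 else "cloud"

steady = [80, 82, 79, 81, 80, 83]  # e.g. a database at near-constant load
bursty = [5, 4, 6, 300, 5, 4]      # e.g. a flash-sale frontend
print(suggest_placement(steady))   # dedicated
print(suggest_placement(bursty))   # cloud
```

Real placement decisions weigh far more than variance – egress exposure, compliance, latency budgets – but the core intuition is exactly this one-line ratio.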

Cloud repatriation is not a default fix. It is a placement decision that must earn its place. Organizations that treat it as a blanket solution often find the costs and complications exceed their projections. Done carefully, with a clear-eyed view of total cost of ownership, the savings are real. Done hastily, it can simply trade one set of problems for another.

What This Means Going Forward

What This Means Going Forward (Image Credits: Pexels)

By mid-2025, repatriation was no longer a fringe idea but a strategic imperative. Analysts predict it could become the defining infrastructure term of 2026, with AI acceleration and geopolitical risks amplifying the shift. The story is still being written, but the direction is clear.

Cloud repatriation will accelerate through 2026, but as a sign of cloud maturity, not abandonment. Organizations are selectively moving workloads back for AI economics, cost control, compliance, and sustainability, while public cloud spending continues to grow. The future is balanced hybrid and multicloud architectures, where each workload runs where it delivers the most value.

Organizations often avoid publicizing their repatriation moves to prevent vendor backlash – which means the real scale of this shift is likely larger than the already-impressive public numbers suggest. The decisions are being made quietly, in budget reviews and infrastructure audits, one workload at a time.

What the cloud exit ultimately reflects is not disillusionment with technology, but something more mature: the recognition that no single platform is optimal for every use case. The companies getting this right are not asking whether to use the cloud. They are asking which problems the cloud is actually the best tool to solve – and being honest when the answer is “not this one.”

About the author
Lucas Hayes
