
2026 Data Center Predictions

APMdigest's Predictions Series continues with 2026 Data Center Predictions — industry experts offer predictions on how data centers will evolve and impact business in 2026. 

OBSERVABILITY MAXIMIZES ROI

The GPU Reckoning — Efficiency Becomes the New Arms Race: The Idle GPU Epidemic will ignite an industry-wide awakening in 2026. The question will no longer be how much compute you own but how intelligently you orchestrate it. Enterprises will use AI-first observability to maximize ROI from every watt, workload, and chip. The winners will transform underused data centers into self-optimizing ecosystems that drive autonomous growth and regenerative impact.
Karthik Sj
GM of AI, LogicMonitor
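The orchestration shift described above can be made concrete. The sketch below is a hypothetical illustration — not LogicMonitor's product or any specific observability API — of how an idle-GPU signal might be derived from sampled utilization metrics so that stranded compute can be flagged:

```python
# Illustrative sketch (assumed metric names and thresholds): flagging
# underused GPUs from periodic utilization samples in the range 0.0-1.0.

def idle_fraction(samples: list[float], idle_threshold: float = 0.10) -> float:
    """Fraction of utilization samples that fall below the idle threshold."""
    if not samples:
        raise ValueError("no utilization samples")
    return sum(1 for u in samples if u < idle_threshold) / len(samples)

def flag_idle_gpus(fleet: dict[str, list[float]], max_idle: float = 0.5) -> list[str]:
    """Return GPU IDs whose idle fraction exceeds max_idle."""
    return [gpu for gpu, samples in fleet.items()
            if idle_fraction(samples) > max_idle]

fleet = {
    "gpu-0": [0.02, 0.05, 0.91, 0.88, 0.03, 0.01],  # mostly idle
    "gpu-1": [0.85, 0.92, 0.78, 0.95, 0.88, 0.90],  # busy
}
print(flag_idle_gpus(fleet))  # → ['gpu-0']
```

In practice the samples would come from a telemetry source such as NVML counters, and the flagged devices would feed a scheduler or rightsizing decision rather than a print statement.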

DATA CENTER VS. CLOUD

Ditching the cloud, moving data back to data centers: In 2026, enterprises will begin migrating select workloads and sensitive data from the public cloud back into their own data centers. The "trillion-dollar paradox," as Andreessen Horowitz described it, is forcing business leaders to face a hard truth: the cloud's convenience often hides long-term cost and control tradeoffs. The agility that once justified the cloud premium has become a drag on profitability. More organizations will move back to the data center out of fear that data entered into the cloud will be ingested by public LLMs; many already run private LLMs to keep their AI work on-premises.

Customers want tighter control over sensitive data and less exposure to cloud outages or the risk that public large language models will ingest proprietary information. The next phase of cloud adoption will look more balanced. Companies will keep what makes sense in the cloud and bring home the workloads that do not. Many will take a hard look at what they are paying for and what they gain in return, then move critical systems back into environments they can fully control. This shift will create more hybrid models that help organizations cut waste, tighten security, and make more informed decisions about where to store their most sensitive data based on cost, performance, and regulatory needs. 
John Kindervag
Chief Evangelist, Illumio

DATA CENTER GEOGRAPHY

Data center geography will become a strategic advantage as operators prioritize locations with abundant, cost-efficient energy and reliable cooling capacity.
Matt Kelly
CTO and VP of Technology Solutions, Global Electronics Association

Location Still Matters — Power, Proximity, and the Next Generation of AI Campuses: In 2026, location strategy will once again define the winners in the data center race. While massive campuses are emerging in what some call "the middle of nowhere," proximity to both power and population centers is becoming increasingly complex — yet critical. The evolving rules around AI training and inference are putting new pressure on latency, making speed a deciding factor much like it was in the early days of search engines. I expect continued growth in regions like Phoenix, Northern Virginia, and the Northeast, where proximity to key markets still matters. I also expect a surge of development in adjacent, power-rich areas like Wisconsin and Indiana, which are close enough to connect but scalable enough to host gigawatt-class facilities spanning 500+ acres.
Tom Traugott
SVP of Emerging Technologies, EdgeCore Digital Infrastructure

MODULAR SOLUTIONS AND TEMPORARY BUILDINGS

Use of temporary buildings and modular solutions will be vital to minimizing "time to power" for data center deployments. We've reached a point where data centers can't be built fast enough. To jumpstart deployments and ensure available power is utilized immediately, we'll see more organizations turn to modular data centers and temporary buildings, such as Microsoft's use of tents, for data center operations until a permanent on-site facility is completed.
Kevin Roof
Director of Offer and Capture Management, LiquidStack

DATA CENTER POWER: THE ENERGY WALL

In 2026, our industry will finally move out of the hype era and into an age of clarity, where AI delivers actual ROI and outcomes. And with that shift, regulation will tighten up. I believe that guardrails will be a good thing, as they further force accountability. But the real change isn't about rules; it's about energy. While the energy issue became apparent this year, AI will run headfirst into the energy wall next year. Data centers are already straining grids, and the chase for ever-larger models will hit physical limits. The next race won't be for the biggest model or the most GPUs; it'll be centered on performance per watt. Efficiency will be the new barometer, and the companies that can deliver powerful AI at a fraction of today's energy cost will be the ones that remain on top.
Jason Williamson
CEO, MythWorx

DATA CENTER POWER: GRID RELIABILITY AND COST STABILIZATION

Data Centers Take a Leading Role in Enhancing Grid Reliability and Cost Stabilization: In 2026, data centers will play a more active role in stabilizing the grid and mitigating cost increases by securing strategic investment and promoting load flexibility via load shedding or curtailment. While increasing both grid utilization and revenue to utilities can help reduce costs for ratepayers, new investment and supply to the grid via storage and on-site generation will help data centers drive grid expansion and modernization for diverse uses. Additionally, when utilities are rewarded for collaboration rather than protecting reserved capacity, data centers are poised to become a key stabilizing force in the energy transition.
Tom Traugott
SVP of Emerging Technologies, EdgeCore Digital Infrastructure

DATA CENTER POWER: NATURAL GAS

The challenge is no longer finding land — it's securing power. The "powered land" heyday of the last 5-10 years is increasingly over, with interconnection and grid upgrade costs now materially exceeding land value. As the grid struggles to keep pace, natural gas will continue to serve as a crucial bridge to sustainable baseload solutions like geothermal and new nuclear. The new geography of AI infrastructure will be defined not just by space, but by speed and power.
Tom Traugott
SVP of Emerging Technologies, EdgeCore Digital Infrastructure

DATA CENTER COOLING: LIQUID COOLING

AI-HPC's power and thermal requirements will outgrow today's data-center designs, making liquid cooling mainstream and forcing a fundamental rethink of power delivery.
Matt Kelly
CTO and VP of Technology Solutions, Global Electronics Association

DATA CENTER COOLING: MODULAR LIQUID COOLING

Modularity will be key to scaling liquid cooling in AI data centers. As AI workloads continue to drive power densities ever higher, data center operators will seek out more powerful, modular liquid cooling systems that can be easily deployed and scaled incrementally as thermal regulation needs grow. By late 2026, expect to see skidded, modular units starting at 2MW (and reaching well beyond) become the de facto models for high-density data center builds. 
Angela Taylor
Chief of Staff & Head of Strategy, LiquidStack

DATA CENTER COOLING: TWO-PHASE DIRECT-TO-CHIP COOLING

A wave of two-phase direct-to-chip cooling solutions will be announced. Two-phase direct-to-chip cooling technologies will become the successor to today's one-phase liquid cooling systems as rack densities climb up to and beyond one megawatt. Similar to the surge around modular liquid cooling systems in the second half of 2025, two-phase liquid cooling technologies will be announced in 2026. The cooling ecosystem will begin coalescing around the supply chain and standards needed to scale when the transition to two-phase direct-to-chip liquid cooling officially begins—which will likely happen in 2027.
Angela Taylor
Chief of Staff & Head of Strategy, LiquidStack

DATA CENTER INVESTMENT: CAPITAL STORM

2026's Data Center Capital Storm — AI and Compute Drive a New Era of Investment: The scale of investment required to support AI's growth is unlike anything we've seen before, and 2026 will mark the beginning of an unprecedented capital storm in data center infrastructure. The expected 15 gigawatts of US data center leasing activity in 2025 alone demands roughly $150 billion in infrastructure funding — before even accounting for the $600 billion in chips that will power those facilities. With interest rates expected to move meaningfully lower, a massive wave of capital — both equity and debt — will unlock, accelerating projects across every major hyperscale and colocation market. That's only the beginning. Leasing is projected to jump another 25% in 2026, pushing total capacity demand toward 19 GW and setting another record. The trajectory of compute demand is staggering: training runs that consume 150-200 MW today could reach multi-gigawatt scale by 2030. Additionally, US AI power requirements will potentially balloon from 5 GW today to 30-70 GW by 2030. Every year, the industry breaks new records, and in 2026, the flood of capital chasing AI infrastructure will redefine the boundaries of scale.
Tom Traugott
SVP of Emerging Technologies, EdgeCore Digital Infrastructure

DATA CENTER INVESTMENT: TOKENS PER WATT PER DOLLAR

Data center investors and operators will trade in the classic PUE metric for "tokens per watt per dollar." Infrastructure buildout is beginning to shift the economics of AI, with data centers transitioning from cost centers to revenue generators. With this transition, metrics for success are shifting from sustainability and conventional efficiency toward revenue generation. The new, top-of-mind metric discussed in industry circles is "tokens per watt per dollar." This new focus means it is no longer about simply using less energy, but about using energy as productively as possible. Since power availability is the primary constraint on data center growth, organizations must use the power they have most effectively. Stranded power represents lost revenue.
Kevin Roof
Director of Offer and Capture Management, LiquidStack
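The contrast between PUE and a throughput-based metric can be shown with arithmetic. The sketch below uses hypothetical facility numbers and an assumed formulation of "tokens per watt per dollar" (the metric has no standardized definition yet), illustrating how a facility can win on PUE but lose on productivity per watt-dollar:

```python
# Illustrative sketch with assumed numbers: PUE rewards using less overhead
# energy, while tokens-per-watt-per-dollar rewards revenue-productive energy.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load power."""
    return total_facility_kw / it_load_kw

def tokens_per_watt_per_dollar(tokens_per_sec: float,
                               total_facility_kw: float,
                               cost_per_hour_usd: float) -> float:
    """Inference throughput normalized by power draw and hourly operating cost."""
    watts = total_facility_kw * 1000.0
    return tokens_per_sec / watts / cost_per_hour_usd

# Facility A: better PUE (1.2), but lower throughput per watt-dollar.
pue_a = pue(total_facility_kw=1_200, it_load_kw=1_000)
a = tokens_per_watt_per_dollar(500_000, total_facility_kw=1_200, cost_per_hour_usd=300)

# Facility B: worse PUE (1.3), but far more productive per watt-dollar.
pue_b = pue(total_facility_kw=1_300, it_load_kw=1_000)
b = tokens_per_watt_per_dollar(900_000, total_facility_kw=1_300, cost_per_hour_usd=320)

print(pue_a < pue_b, b > a)  # A wins on PUE; B wins on the new metric
```

Under these assumed figures, the classic efficiency metric and the revenue-oriented metric rank the two facilities in opposite order, which is exactly the shift the prediction describes.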
