The total number of datacenters (of all types) in the United States declined for the first time in 2009, falling by 0.7%, triggered by the economic crisis of 2008 and the resultant closing of thousands of remote locations with server closets and rooms. At the same time, total datacenter capacity grew by slightly more than 1% as larger datacenter environments continued to expand despite the economic slowdown. According to new research from International Data Corporation (IDC), these trends have continued in the years since 2009 and reflect a major change in datacenter and IT asset deployment that will accelerate further in coming years.
The dynamics driving these changes in the US datacenter market center on the fast-growing array of applications and devices used to communicate and conduct business, the rapid digitization of vast amounts of unstructured data, and the desire to collect, store, and analyze this information in ever-greater volume and detail. These dynamics have had a significant impact on how businesses build, organize, and invest in datacenter facilities and assets.
"CIOs are increasingly being asked to improve business agility while reducing the cost of doing business through aggressive use of technologies in the datacenter," said Rick Villars, vice president, Datacenter and Cloud Research at IDC. "At the same time, they have to ensure the integrity of the business and its information assets in the face of natural disasters, datacenter disruptions, or local system failures. To achieve both sets of objectives, IT decision makers had to rethink their approach to the datacenter."
The most notable factor reshaping datacenter dynamics has been the dramatic increase in the use of server virtualization to consolidate server assets. Virtualization and server consolidation have driven significant declines in physical datacenter size and eliminated the need for many smaller datacenters as applications were moved to larger central datacenters. This shift has also made investments in power and energy management that much more critical for datacenter managers.
While the aggressive use of virtualization has reduced the rate of growth in server deployments in datacenters, the creation, organization, and distribution of files and rich content are creating a rapid and sustained increase in storage deployments. One of the key characteristics of the content explosion is data centralization, driven by performance, compliance, and scale requirements. As a result, midsize and large datacenters are the main segments where the content explosion is having a major impact.
A third factor shaping the datacenter dynamic has been the shift toward a cloud model for application, platform, and infrastructure delivery. Here the focus is on extending the value and scale of virtualization by boosting operational efficiency and improving IT agility. Along with the content explosion, the buildout of public cloud offerings is driving major growth in the number and size of larger datacenters.
Combined, these factors will continue to drive a slow but steady decline in the number and size of smaller internal datacenters. For similar reasons, large internal datacenters will not grow at anywhere near the same rate as very large datacenters operated by service providers.
IDC expects the total number of datacenters in the US to decline from 2.94 million in 2012 to 2.89 million in 2016. This decline will be concentrated in internal server rooms and closets, with a very small decline in mid-sized local datacenters.
Despite the slight decline in total datacenters, total datacenter space will increase significantly, growing from 611.4 million square feet in 2012 to more than 700 million square feet in 2016. By the end of the forecast period, IDC expects service providers will account for more than a quarter of all large datacenter capacity in place in the United States.
The IDC report, U.S. Datacenter 2012-2016 Forecast (Doc #237070), provides a census of U.S. datacenters by size, sophistication, and ownership. The report provides a forecast of datacenter investment plans through 2016 and assesses the impact of changing industry business models as well as IT and network developments on datacenter design, build, and management. The report also includes a new datacenter taxonomy based on a multitude of factors, including scope of IT personnel control, physical location, types of applications supported, power and cooling, downtime, floor area, and staff skill sets.