Complacency Kills Uptime in Virtualized Environments
April 10, 2018

Chris Adams
Park Place Technologies


Risk is relative. For example, studies have suggested that mandatory seatbelts did less to improve overall highway safety than expected, while more padding on hockey and American football players can increase injuries. This is known as the Peltzman Effect, and it describes how humans change behavior when risk factors are reduced: they often act more recklessly and drive risk right back up.

The phenomenon is recognized by many economists, its effects have been studied in medicine, and I'd argue it is at the root of an interesting trend in IT: the rising cost of downtime despite our more reliable virtualized environments.

Downtime Costs Are Rising

A study by the Ponemon Institute, for example, found the average cost of data center outages rose from $505,502 in 2010 to $740,357 in 2016. And the maximum cost was up 81% over the same time period, reaching over $2.4 million.
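To put those Ponemon figures in perspective, here is a quick back-of-the-envelope calculation. The dollar amounts come from the study; the growth math below is simply derived from them:

```python
# Average data center outage cost, per the Ponemon Institute study
cost_2010 = 505_502
cost_2016 = 740_357

# Overall increase over the six-year span
total_growth = cost_2016 / cost_2010 - 1

# Compound annual growth rate over the same six years
annual_growth = (cost_2016 / cost_2010) ** (1 / 6) - 1

print(f"Total increase:  {total_growth:.0%}")   # roughly a 46% rise
print(f"Annualized rate: {annual_growth:.1%}")  # roughly 6.6% per year
```

In other words, average outage costs grew faster than general inflation over that period, which is part of what makes the trend worth explaining.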

There are a lot of factors represented in these figures. For example, productivity losses are higher because labor costs are, and missed business opportunities are worth more today than they were several years ago. Yet advancements like virtual machines (VMs) with their continuous mirroring and seamless backups have not slashed downtime costs to the degree many IT pros had once predicted.

Have we as IT professionals dropped our defensive stance because we believe too strongly in the power of VMs and other technologies to save us? There are some signs that we have. For all the talk of cyberattacks, deserved as it is, they cause only 10% of downtime. Hardware failures, on the other hand, account for 40%, according to Network Computing. And the Ponemon research referenced above found simple UPS problems to be at the root of one-quarter of outages.

Of course, VMs alone are not to blame, but it's worth looking at how downtime costs can increase when businesses rely on high-availability, virtually partitioned servers.

3 VM-Related Reasons for the Trend

The problem with VMs generally boils down to an "all eggs in one basket" problem. Separate workloads that would previously have run on multiple physical servers are consolidated to one server. Mirroring, automatic failover, and backups are intended to reduce risk associated with this single point of failure, but when these tactics fall through or complicated issues cascade, the resulting downtime can be especially costly for several reasons.
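The consolidation tradeoff can be sketched with a toy model. The workload count and failure rate below are illustrative assumptions, not figures from any study cited here: the expected number of disrupted workloads per year stays the same, but the blast radius of any single failure grows tenfold, which is exactly what bites when failover falls through.

```python
# Hypothetical numbers to illustrate the "all eggs in one basket" problem.
# Assume ten workloads and a 5% annual chance of hardware failure per machine.
workloads = 10
p_failure = 0.05

# Before virtualization: one workload per physical server.
# Expected workload-outages per year across ten servers:
expected_before = 10 * p_failure * 1        # 0.5 workload-outages/year

# After consolidation: all ten workloads share one host.
# The expected value is unchanged...
expected_after = 1 * p_failure * workloads  # also 0.5 workload-outages/year

# ...but a single failure now disrupts every workload at once.
blast_radius = workloads

print(expected_before, expected_after, blast_radius)
```

Because outage costs rarely scale linearly (one big correlated outage is usually worse than ten small independent ones), the unchanged expected value understates the real risk.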

1. Utilization rates are higher

Analyses by McKinsey & Company and Gartner have both pegged utilization rates for non-virtualized servers in the 6% to 12% range. With VMs, however, utilization typically approaches 30% and often stretches far higher. These busy servers are processing more workloads, so downtime impacts are multiplied.
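A simple sketch of that multiplier effect, using the utilization ranges above (the hourly capacity figure is an assumption chosen purely for illustration):

```python
# Work a server could process per hour at 100% load (illustrative assumption)
capacity_units_per_hour = 100

legacy_utilization = 0.10  # within the 6-12% range for non-virtualized servers
vm_utilization = 0.30      # the ~30% typical for virtualized hosts

# Work actually lost for each hour the server is down
legacy_loss = capacity_units_per_hour * legacy_utilization
vm_loss = capacity_units_per_hour * vm_utilization

print(f"Work lost per outage hour, legacy server: {legacy_loss:.0f} units")
print(f"Work lost per outage hour, VM host:       {vm_loss:.0f} units")
print(f"Impact multiplier: {vm_loss / legacy_loss:.1f}x")
```

At these rates, every hour of downtime on a virtualized host destroys roughly three times the work of the same hour on a lightly loaded legacy server, and the multiplier grows as utilization "stretches far higher."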

2. More customers are affected

Internal and external customers now share physical servers through VMs, so a single outage affects a greater variety of workloads. This expands the business consequences. A co-location provider could easily face irate calls and emails from dozens of clients, and a corporate data center manager could see complaints rise from the help desk to the C-suite.

3. Complexity is prolonging downtime

Virtualization projects were supposed to simplify data centers, but many have not, according to CIO Magazine. In its survey, respondents reported an average of 16 outages per year, 11 of them caused by system failures rooted in complexity. And more complex systems are harder to troubleshoot and repair, making for longer downtime and higher overall costs.

Read Part 2: Solutions for Minimizing Server Downtime

Chris Adams is President and COO of Park Place Technologies
