Complacency Kills Uptime in Virtualized Environments
April 10, 2018

Chris Adams
Park Place Technologies


Risk is relative. Studies have suggested, for example, that seatbelt mandates did less to improve overall highway safety than expected, while more padding on hockey and American football players can actually increase injuries. This is the Peltzman Effect, which describes how people change their behavior when risk factors are reduced: they often act more recklessly and drive risk right back up.

The phenomenon is recognized by many economists, its effects have been studied in the field of medicine, and I'd argue it is at the root of an interesting trend in IT — namely the increasing cost of downtime despite our more reliable virtualized environments.

Downtime Costs Are Rising

A study by the Ponemon Institute, for example, found the average cost of data center outages rose from $505,502 in 2010 to $740,357 in 2016. And the maximum cost was up 81% over the same time period, reaching over $2.4 million.
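For a quick sense of scale, here is a back-of-the-envelope check of those Ponemon figures. The dollar amounts and the 81% growth rate come from the study as cited above; the short Python script itself is purely illustrative.

# Back-of-the-envelope check on the Ponemon figures cited above.
# The dollar values and growth rate are from the study; the script is illustrative.
avg_2010, avg_2016 = 505_502, 740_357   # average cost of a data center outage
max_2016, max_growth = 2_400_000, 0.81  # reported maximum cost and its growth

avg_growth = (avg_2016 - avg_2010) / avg_2010
print(f"Average outage cost rose {avg_growth:.0%}")                  # ~46%
print(f"Implied 2010 maximum: ${max_2016 / (1 + max_growth):,.0f}")  # ~$1.33 million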

There are a lot of factors represented in these figures. For example, productivity losses are higher because labor costs have risen, and missed business opportunities are worth more today than they were several years ago. Yet advancements like virtual machines (VMs), with their continuous mirroring and seamless backups, have not slashed downtime costs to the degree many IT pros once predicted.

Have we as IT professionals dropped our defensive stance because we believe too strongly in the power of VMs and other technologies to save us? There are some signs that we have. For all the talk of cyberattacks—well deserved as it is—they cause only 10% of downtime. Hardware failures, on the other hand, account for 40%, according to Network Computing. And the Ponemon research referenced above found simple UPS problems to be at the root of one-quarter of outages.

Of course, VMs alone are not to blame, but it's worth looking at how downtime costs can increase when businesses rely on high-availability, virtually partitioned servers.

3 VM-Related Reasons for the Trend

The problem with VMs generally boils down to an "all eggs in one basket" problem. Separate workloads that would previously have run on multiple physical servers are consolidated to one server. Mirroring, automatic failover, and backups are intended to reduce risk associated with this single point of failure, but when these tactics fall through or complicated issues cascade, the resulting downtime can be especially costly for several reasons.
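To see the "eggs in one basket" arithmetic, consider a simple model: if N workloads that once ran on N physical servers are consolidated onto a single host, one hardware failure with failover unavailable now interrupts all N of them at once. The sketch below uses made-up numbers for illustration; none of the values come from this article or the Ponemon study.

# Hypothetical "eggs in one basket" model. Every number here is an assumption
# chosen for illustration, not a figure from the article or the Ponemon study.
workloads = 10                  # workloads consolidated onto one physical host
cost_per_workload_hour = 2_000  # assumed business cost per workload per hour of downtime
outage_hours = 3                # assumed time to restore the host

# Dedicated servers: one hardware failure takes down one workload.
dedicated_hit = 1 * cost_per_workload_hour * outage_hours
# Consolidated host with failover unavailable: the same failure takes down all of them.
consolidated_hit = workloads * cost_per_workload_hour * outage_hours

print(f"One outage, dedicated server:  ${dedicated_hit:,}")     # $6,000
print(f"One outage, consolidated host: ${consolidated_hit:,}")  # $60,000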

1. Utilization rates are higher

Studies by McKinsey & Company and Gartner have both pegged utilization rates for non-virtualized servers in the 6% to 12% range. With VMs, however, utilization typically approaches 30% and often stretches far higher. These busy servers are processing more workloads, so downtime impacts are multiplied.
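Those utilization figures translate directly into blast radius. The quick calculation below simply takes the ratio of the numbers quoted above; nothing beyond that arithmetic is implied.

# Rough multiplier implied by the utilization figures quoted above.
non_virtualized = (0.06, 0.12)  # McKinsey / Gartner range for dedicated servers
virtualized = 0.30              # typical virtualized host, per the article

for u in non_virtualized:
    print(f"A host at 30% utilization carries ~{virtualized / u:.1f}x "
          f"the work of a dedicated server at {u:.0%}")
# ~5.0x at 6% utilization and ~2.5x at 12%, so one outage now idles
# several servers' worth of processing rather than one.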

2. More customers are affected

Because internal and external customers now share physical servers through VMs, an outage affects a greater variety of workloads, which expands the business consequences. A colocation provider could easily face irate calls and emails from dozens of clients, and a corporate data center manager could see complaints rise from the help desk to the C-suite.

3. Complexity is prolonging downtime

Virtualization projects were supposed to simplify data centers, but many have not, according to CIO Magazine. In its survey, respondents reported an average of 16 outages per year, 11 of which were caused by system failures rooted in complexity. And more complex systems are more difficult to troubleshoot and repair, making for longer downtime and higher overall costs.
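To put those survey numbers in annual terms, here is a hedged sketch. The outage counts come from the CIO survey cited above; the repair times and hourly cost are placeholder assumptions you would replace with your own figures.

# Rough annual downtime cost model. Outage counts are from the CIO survey
# cited above; repair times and the hourly cost are placeholder assumptions.
outages_per_year = 16
complexity_related = 11      # outages attributed to complexity-driven system failure

simple_mttr_hours = 1.0      # assumed repair time for a straightforward outage
complex_mttr_hours = 4.0     # assumed repair time when complexity slows troubleshooting
cost_per_hour = 9_000        # assumed downtime cost per hour for this environment

simple_cost = (outages_per_year - complexity_related) * simple_mttr_hours * cost_per_hour
complex_cost = complexity_related * complex_mttr_hours * cost_per_hour

print(f"Straightforward outages:   ${simple_cost:,.0f} per year")   # $45,000
print(f"Complexity-driven outages: ${complex_cost:,.0f} per year")  # $396,000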

Read Part 2: Solutions for Minimizing Server Downtime

Chris Adams is President and COO of Park Place Technologies
