New Data Reveals Widespread Downtime and Security Risks in 99% of Enterprise Private Cloud Environments
February 08, 2017

Doron Pinhas
Continuity Software


Industrial and technological revolutions happen because new manufacturing systems or technologies make life easier, less expensive, more convenient, or more efficient. It's been that way in every epoch – but Continuity Software's new study indicates that in the cloud era, there's still work to be done.

With the rise of cloud technology in recent years, Continuity Software conducted an analysis of live enterprise private cloud environments – and the results are not at all reassuring. Based on configuration data gathered from over 100 enterprise environments over the past year, the study found widespread performance issues in 97% of them, putting those IT systems at significant risk of downtime. And although the participating enterprises ranked downtime as their greatest concern, downtime risks were present in every environment tested.

A deep dive into the study findings revealed numerous reasons for the increased operational risk in private cloud environments, ranging from lack of awareness of critical vendor recommendations, to inconsistent configuration across virtual infrastructure components, to misalignment between different technology layers (such as virtual networks and physical resources, or the storage and compute layers).

The downtime risks were not specific to any particular configuration of hardware, software, or operating system. Indeed, the studied enterprises used a diverse technology stack: 48% of the organizations are pure Windows shops, compared to 7% that run primarily Linux, while 46% use a mix of operating systems. Close to three quarters (73%) of the organizations use EMC data storage systems, 27% use replication for automated offsite data protection, and 12% use active-active failover for continuous availability.

The IT departments of the companies in question certainly include top engineers and administrators – yet nearly all of the companies in the study have experienced some, and in a few cases many, issues.

While the results are unsettling, they are certainly not surprising. The modern IT environment is extremely complex and volatile: changes are made daily by multiple teams in a rapidly evolving technology landscape. With daily patching, upgrades, capacity expansion and so on, the slightest miscommunication between teams or gap in knowledge can result in hidden risks to the stability of the IT environment.

Unlike legacy systems, in which standard testing and auditing practices are employed regularly (typically once or twice a year), private cloud infrastructure is not regularly tested. Interestingly, this fact is not always fully appreciated, even by seasoned IT experts. Virtual infrastructure is often designed to be "self-healing," using features such as virtual machine High Availability and workload mobility. Indeed, evidence that these features work is regularly on display; after all, IT executives may argue, "not a week goes by without some virtual machines failing over successfully."

This perception of safety can be misleading, since a chain is only as strong as its weakest link; simply put, it's a numbers game. Over the course of any given week, only a minute fraction of the virtual machines will actually be failed over – usually less than 1%. What about the other 99%? Is it realistic to expect that they're also fully protected?
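To make the numbers game concrete, here is a small back-of-the-envelope sketch. The fleet size and weekly failover rate are assumptions for illustration, not figures from the study: if roughly 1% of virtual machines fail over in any given week, a large share of the estate can go a full year without its recoverability ever being demonstrated in practice.

```python
# Hypothetical illustration of the "numbers game": if roughly 1% of VMs
# fail over in any given week, how much of the estate has actually been
# exercised by a real failover after a year?

weekly_failover_rate = 0.01   # assumed: ~1% of VMs fail over per week
total_vms = 1000              # assumed fleet size
weeks = 52

# Probability that a given VM was never exercised by a failover all year,
# assuming failovers hit VMs independently and uniformly at random.
never_exercised = (1 - weekly_failover_rate) ** weeks

print(f"Share of VMs never exercised after {weeks} weeks: {never_exercised:.0%}")
print(f"That is roughly {never_exercised * total_vms:.0f} of {total_vms} VMs "
      "whose recoverability was never demonstrated in practice.")
```

Under these assumptions, roughly 59% of the virtual machines would never have been through a real failover by year's end – their protection is taken on faith, not proven.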

The only way to determine whether a private cloud is truly resilient would be to prove that every possible permutation of failure could be successfully averted. Of course, this could not be accomplished with manual processes, which would be far too time-consuming and potentially disruptive. The only sustainable and scalable approach is to automate private cloud configuration validation and testing.
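As a rough illustration of what automated configuration validation means in practice, the sketch below compares each host's settings against a reference configuration and flags drift. The host names, setting names, and values are hypothetical; a real tool would pull this inventory from the virtualization and storage management APIs.

```python
# Minimal sketch of automated configuration validation: compare each
# cluster host's settings against a reference "golden" configuration and
# flag any drift. All data here is invented for illustration.

GOLDEN = {
    "ha_enabled": True,
    "multipathing_policy": "round-robin",
    "vswitch_mtu": 9000,
}

hosts = {
    "esx-01": {"ha_enabled": True,  "multipathing_policy": "round-robin", "vswitch_mtu": 9000},
    "esx-02": {"ha_enabled": True,  "multipathing_policy": "fixed",       "vswitch_mtu": 9000},
    "esx-03": {"ha_enabled": False, "multipathing_policy": "round-robin", "vswitch_mtu": 1500},
}

def validate(hosts, golden):
    """Return a list of (host, setting, actual, expected) deviations."""
    findings = []
    for host, config in hosts.items():
        for setting, expected in golden.items():
            actual = config.get(setting)
            if actual != expected:
                findings.append((host, setting, actual, expected))
    return findings

for host, setting, actual, expected in validate(hosts, GOLDEN):
    print(f"{host}: {setting} is {actual!r}, expected {expected!r}")
```

The point is not the specific checks but that they run continuously and exhaustively across the estate, rather than once or twice a year against a sample.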

Individual vendors offer basic health measurements for their own solution stack (for example, VMware, Microsoft, EMC and others). While useful, this is far from a complete solution since, as the study shows, the majority of the issues occur due to misalignment between the different layers. In recent years, more holistic solutions that offer vendor-agnostic, cross-domain validation have entered the market.
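To illustrate what a cross-domain check might look like, here is a minimal, hypothetical example that correlates the compute and storage layers: a virtual machine marked as protected should only reside on a datastore that the storage layer actually replicates. The inventory data is invented for illustration; in practice it would come from the hypervisor and storage-array APIs.

```python
# Hypothetical cross-layer check: a VM flagged as "protected" at the
# compute layer should only live on datastores that the storage layer
# reports as replicated. Any mismatch is exactly the kind of silent
# misalignment a single-vendor health check tends to miss.

replicated_datastores = {"ds-prod-01", "ds-prod-02"}   # storage-layer view

vms = [
    {"name": "billing-db",   "protected": True,  "datastore": "ds-prod-01"},
    {"name": "web-frontend", "protected": True,  "datastore": "ds-local-03"},
    {"name": "test-runner",  "protected": False, "datastore": "ds-local-03"},
]

for vm in vms:
    if vm["protected"] and vm["datastore"] not in replicated_datastores:
        print(f"MISALIGNMENT: {vm['name']} is marked protected but sits on "
              f"non-replicated datastore {vm['datastore']}")
```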

While such approaches come at a cost, they are far less expensive than the alternative: experiencing a critical outage. The cost of a single hour of downtime, according to multiple industry studies, can easily reach hundreds of thousands of dollars – and, in some verticals, even millions.

Doron Pinhas is CTO of Continuity Software.
