New Data Reveals Widespread Downtime and Security Risks in 99% of Enterprise Private Cloud Environments
February 08, 2017

Doron Pinhas
Continuity Software


Industrial and technological revolutions happen because new manufacturing systems or technologies make life easier, less expensive, more convenient, or more efficient. It's been that way in every epoch – but Continuity Software's new study indicates that in the cloud era, there's still work to be done.

With the rise of cloud technology in recent years, Continuity Software conducted an analysis of live enterprise private cloud environments – and the results are not at all reassuring. According to configuration data gathered from over 100 enterprise environments over the past year, the study found widespread performance issues in 97% of them, putting IT systems at significant risk of downtime. Downtime risks, which the participating enterprises ranked as their greatest concern, were present in every tested environment.

A deep dive into the study findings revealed numerous reasons for the increased operational risk in private cloud environments, ranging from lack of awareness of critical vendor recommendations, to inconsistent configuration across virtual infrastructure components, to incorrect alignment between different technology layers (such as virtual networks and physical resources, or the storage and compute layers).

The downtime risks were not specific to any particular configuration of hardware, software, or operating system. Indeed, the studied enterprises used a diverse technology stack: 48% of the organizations were pure Windows shops, 7% ran primarily Linux, and 46% used a mix of operating systems. Close to three quarters (73%) of the organizations used EMC data storage systems, 27% used replication for automated offsite data protection, and 12% utilized active-active failover for continuous availability.

The IT departments of the companies in question certainly include top engineers and administrators – yet nearly all of the companies in the study experienced issues, and in a few cases many.

While the results are unsettling, they are certainly not surprising. The modern IT environment is extremely complex and volatile: changes are made daily by multiple teams in a rapidly evolving technology landscape. With daily patching, upgrades, capacity expansion, and so on, the slightest miscommunication between teams, or a knowledge gap, could result in hidden risks to the stability of the IT environment.

Unlike legacy systems, for which standard testing and auditing practices are employed regularly (typically once or twice a year), private cloud infrastructure is not regularly tested. Interestingly, this fact is not always fully realized, even by seasoned IT experts. Virtual infrastructure is often designed to be "self-healing," using features such as virtual machine High Availability and workload mobility. Indeed, some evidence that these mechanisms work is regularly on display; after all, IT executives may argue, "not a week goes by without some virtual machines failing over successfully."

This perception of safety can be misleading, since a chain is only as strong as its weakest link. Simply put, it's a numbers game: over the course of any given week, only a minute fraction of the virtual machines will actually fail over – usually less than 1%. What about the other 99%? Is it realistic to expect that they're also fully protected?
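To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The fleet size is hypothetical, chosen only to illustrate the sub-1% weekly failover figure cited above:

```python
# A back-of-the-envelope illustration of the "numbers game" above.
# The fleet size is hypothetical; the failover rate reflects the
# "less than 1% per week" figure cited in the text.

total_vms = 2000             # hypothetical private cloud fleet
weekly_failover_rate = 0.01  # at most ~1% of VMs fail over in a given week

exercised = int(total_vms * weekly_failover_rate)  # protection actually proven
untested = total_vms - exercised                   # protection merely assumed

print(f"Failover protection exercised this week: {exercised} VMs")
print(f"Failover protection unproven:            {untested} VMs "
      f"({untested / total_vms:.0%})")
```

Even under generous assumptions, the successful failovers an executive sees each week say nothing about the 99% of machines that were never exercised.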

The only way to determine whether the private cloud is truly resilient would be to prove that every possible permutation of failure could be successfully averted. Of course, this cannot be accomplished with manual processes, which would be far too time-consuming and potentially disruptive. The only sustainable and scalable approach is to automate private cloud configuration validation and testing.
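What might such automated validation look like? The following is a minimal sketch, not any vendor's actual product or API: a hypothetical Python check of one cross-layer rule, namely that every virtual network a VM depends on must exist on every host in its HA cluster, since failover to a host missing that network would fail. All inventory data, names, and structures are invented for illustration; in practice they would be collected from the virtualization layer's management APIs.

```python
# A minimal, hypothetical sketch of automated configuration validation.
# Inventory data would in practice be collected from management APIs;
# here it is hard-coded for illustration.

hosts = {
    "host-a": {"networks": {"vlan10", "vlan20"}},
    "host-b": {"networks": {"vlan10"}},  # vlan20 is missing on this host
}

vms = [
    {"name": "db01",  "network": "vlan20", "cluster_hosts": ["host-a", "host-b"]},
    {"name": "web01", "network": "vlan10", "cluster_hosts": ["host-a", "host-b"]},
]

def find_failover_risks(vms, hosts):
    """Flag VMs that could not restart on every host in their HA cluster."""
    risks = []
    for vm in vms:
        for host in vm["cluster_hosts"]:
            if vm["network"] not in hosts[host]["networks"]:
                risks.append(f"{vm['name']}: network {vm['network']} "
                             f"not configured on {host}")
    return risks

for risk in find_failover_risks(vms, hosts):
    print("RISK:", risk)
# -> RISK: db01: network vlan20 not configured on host-b
```

The point of the sketch is that a rule like this can be evaluated continuously across the entire fleet, rather than proven only for the handful of VMs that happen to fail over in a given week.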

Individual vendors (for example, VMware, Microsoft, and EMC) offer basic health measurements for their own solution stacks. While useful, this is far from a complete solution since, as the study shows, the majority of the issues stem from incorrect alignment between the different layers. In recent years, more holistic solutions that offer vendor-agnostic, cross-domain validation have entered the market.
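As a further hypothetical illustration of a cross-domain rule, the sketch below spans the compute and storage layers: a VM whose owner expects offsite protection must actually reside on a replicated datastore. Again, the inventory format and field names are invented, not taken from any vendor API:

```python
# Hypothetical cross-domain rule: compute-layer intent (a VM tagged as
# needing offsite protection) must match storage-layer reality (its
# datastore is replicated). All inventory shown is invented.

datastores = {
    "ds-prod-01": {"replicated": True},
    "ds-prod-02": {"replicated": False},
}

vms = [
    {"name": "erp01",  "datastore": "ds-prod-02", "offsite_protected": True},
    {"name": "test01", "datastore": "ds-prod-02", "offsite_protected": False},
]

for vm in vms:
    if vm["offsite_protected"] and not datastores[vm["datastore"]]["replicated"]:
        print(f"RISK: {vm['name']} requires offsite protection but "
              f"{vm['datastore']} is not replicated")
# -> RISK: erp01 requires offsite protection but ds-prod-02 is not replicated
```

No single vendor's health check would catch this, because each layer is correctly configured in isolation; the risk only appears when the two layers are examined together.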

While such approaches come at a cost, they are by far less expensive than the alternative: experiencing a critical outage. According to multiple industry studies, a single hour of downtime can easily cost hundreds of thousands of dollars (and, in some verticals, even millions).

Doron Pinhas is CTO of Continuity Software.
