Lack of Infrastructure Visibility Puts Businesses at Risk
July 31, 2018

Len Rosenthal
Virtual Instruments


Most enterprises lack the complete visibility required to avoid business-impacting application outages and slowdowns. As a result, nearly 90 percent of enterprises are unable to consistently meet service level agreements (SLAs) for their business-critical applications, according to a recent survey conducted by Dimensional Research and Virtual Instruments. The research points to a serious gap in IT operations teams' ability to monitor their highly virtualized, multi-vendor hybrid data center environments, and it shows that this lack of visibility is significantly impacting the business.

Blind Spots, Slowdowns and Outages Abound


The reality is that large enterprises endure a substantial number of application outages and performance issues every year, and an overwhelming majority of those surveyed indicated that a slowdown impacts the business just as much as a full outage.

86 percent of users experience two or more significant outages a year, with 61 percent suffering from four or more in the same period.

59 percent of application outages and performance problems are related to infrastructure, which raises the question: why can't IT teams see these problems coming, and what's getting in the way of timely resolution?

Too Many Cooks in the Kitchen

There are many dozens of infrastructure and application monitoring tools available to enterprises, so why does this visibility gap still exist?

The research showed that it is not necessarily a lack of tools causing the problem, but rather a sprawl of too many silo-specific tools. In fact, more than 70 percent of respondents use more than five IT infrastructure monitoring tools, and 15 percent use more than 20!

But despite this plethora of tools, 54 percent of companies lack full visibility into their infrastructure and application workload behavior, and 42 percent of companies operate primarily in "reactive mode" when managing their infrastructure.

Teamwork Makes the Dream Work

When it comes to the modern enterprise, there is no single internal team that can accurately manage and assess application performance requirements. However, fewer than half of enterprises take a collaborative approach to establishing performance requirements for new data center infrastructure. With no collective understanding of how applications relate to the underlying infrastructure, the resulting blind spots cause chain reactions that leave enterprises highly exposed.


Nearly 40 percent of enterprises say that performance issues related to infrastructure are the most challenging to resolve, and when you consider that 79 percent of application outages and other issues directly impact customers, there just isn't room for guessing.

Deeper Insights Are the Key

This lack of visibility and proactive infrastructure and application management contributes to a lack of confidence among IT teams and their executives. In fact, 62 percent doubt that their current infrastructure can meet their projected performance needs over the next two years, and two-thirds of respondents feel that they are often held personally responsible for application outages and slowdowns.

In addition, with an increasing number of applications being deployed in public clouds, nearly 65 percent are concerned about the perceived value of the internal IT infrastructure team to the business.

As discouraging as these findings may seem, the numbers indicate a strong opportunity for engineering, operations and application teams to come together and gain a deeper understanding of the impact of their applications on the underlying infrastructure, and vice versa. Since applications and infrastructure are intertwined to the point where they can no longer be viewed as distinct entities, an infrastructure monitoring approach that understands application workload behavior is essential to performance assurance.
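
To make that point concrete, here is a minimal sketch, assuming nothing more than time-aligned monitoring samples; it is not drawn from the survey or from any Virtual Instruments product, and every metric value in it is hypothetical. It correlates application response times with two infrastructure metrics to show how workload-aware, cross-layer visibility can narrow a slowdown down to the layer responsible.

# A minimal, purely illustrative sketch: given time-aligned samples of application
# response time and two infrastructure metrics (all values below are hypothetical),
# check which infrastructure metric moves with the application slowdown.

from statistics import correlation  # requires Python 3.10+

app_latency_ms = [12, 14, 13, 45, 60, 58, 15, 13]                 # application response time
storage_latency_ms = [1.1, 1.2, 1.0, 9.8, 12.4, 11.9, 1.3, 1.1]   # storage array latency
cpu_utilization = [35, 42, 37, 40, 36, 41, 39, 38]                # host CPU percent

for name, series in [("storage latency", storage_latency_ms),
                     ("CPU utilization", cpu_utilization)]:
    r = correlation(app_latency_ms, series)  # Pearson correlation coefficient
    print(f"correlation of app latency with {name}: {r:.2f}")

# Here the slowdown tracks storage latency closely and CPU hardly at all, so the
# investigation starts at the storage tier rather than the compute tier.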

The bottom line is that in today's highly competitive business environment, enterprises cannot afford to test their customers' limited patience by having an unacceptable number of application outages or slowdowns.

Len Rosenthal is CMO at Virtual Instruments
