Franken-Monitoring - A Case of Too Many Tools
Most Organizations Have 11 or More Tools to Manage Application Performance
August 06, 2015

Kalyan Ramanathan
AppDynamics


In a recent interview, an IT operations director told us, “We frankly have too many tools, and many of them weren’t performing to our expectations.”

If you are an enterprise ops leader managing complex applications, you can probably relate to that statement. At AppDynamics, we call this “Franken-monitoring,” a situation characterized by many, usually too many, siloed tools — for application, server, database, end-user client, etc. — that provide varying levels of disparate visibility into IT applications.

The challenges with this approach include:

■ Tools have minimal integration or common context, which makes it nearly impossible to manage the application or its business transactions.

■ Tools are designed for subject-matter experts, so it’s hard to provide value to the ops team as a whole.

■ Tools have high total cost of ownership, since every tool has to be independently procured, installed and managed, and staff have to be trained in their use.

2015 APM Tools Survey Finds That Tools Are Underutilized and Solving Performance Problems Is Still a Massive Challenge

We commissioned analyst firm Enterprise Management Associates (EMA) to get to the bottom of this. In the 2015 APM Tools Survey, EMA found that a majority of surveyed enterprises have 11 or more commercial tools in their arsenal to manage application performance.

Nearly two-thirds of respondents report that it takes at least three hours to determine the root cause of performance issues; one-third report that it takes six or more hours to find the source of an issue.

EMA’s survey indicated that the lack of application-focused solutions appears to contribute to current IT challenges, with IT teams often trying to manage modern, complex applications with siloed tools and primarily manual processes. Just about every user of monitoring tools complains about the challenges of having too many tools without any situational awareness. Current approaches to integrate these tools with solutions like MoM (manager of managers) or CMDB (configuration management database) have for the most part failed, because it is hard to stitch together these disparate solutions from different vendors.

A recent Gartner survey pointed to exactly this challenge: the key reasons (besides price) for poor APM adoption were the complexity of the tools and poor integration between them.

Specifically, the EMA study found:

■ Siloed and shelved monitoring tools: 65 percent of the companies surveyed indicated that they own more than 10 different commercial monitoring products. Nearly half also indicated that 50 percent or fewer of their purchased tools are actively being used.

■ Manual resources expended on application support: According to respondents, calls from users are the second-most frequent way IT organizations find out about application-related problems (27 percent cited detection by monitoring centers; 25 percent cited user calls). Line staff, those closest to the problem, report a significantly higher incidence, citing user calls as their first “heads up” 35 percent of the time.

■ Extensive people-hours required to solve a single application problem: IT organizations surveyed indicated that, for those application-related problems escalated beyond Level 1 support, mean time to repair (MTTR) is most often between five and seven hours; in addition, between three and four people are typically required to solve a given problem.

“Based on our findings, the majority of companies are still trying to manage complex applications with a combination of siloed tools, ‘all hands on deck’ interactive marathons, and tribal knowledge,” said Julie Craig, Research Director, Application Management at EMA. “The ability to automatically discover and manage the business transaction topology as the application itself changes is a significant challenge encountered by virtually every IT organization.”

In addition to EMA’s finding that most companies have under-invested in application-specific management tools, the survey also found clear purchasing preferences regarding future APM purchases:

■ Almost 75 percent identified “flexible deployment options” (supporting SaaS, on-premises, and/or hybrid deployments) as either “critical or important” factors for purchasing an APM solution.

■ More than 70 percent identified the “ability to monitor infrastructure as a service (IaaS) public cloud” as either critical or important.

■ When asked about their top “must have” features for an APM product purchase, respondents selected the following:

#1 feature preference: An integrated monitoring platform consolidating application and infrastructure monitoring in one solution

#2 feature preference: Cloud-readiness features necessary to monitor/manage application components hosted in public cloud

#3 feature preference: Support for trending and reporting

The EMA study shows that very few IT organizations have an accurate, comprehensive view of today’s complex application environment, business transactions and their dependencies. Unified Monitoring is a new way to manage applications proactively, by tracing and monitoring transactions from the end user through the entire application and infrastructure environment to help quickly and proactively solve performance issues and ensure excellent user experience. Companies no longer need to waste valuable time and resources on a dozen different tools that will likely just collect dust on the shelf.
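To make the idea of transaction-centric monitoring concrete, here is a minimal sketch of the underlying principle: every tier that touches a request records its timing against one shared transaction ID, so a single slow request can be followed end to end rather than pieced together from separate web, app, and database tools. All names and functions below (checkout, database_query, SPANS, etc.) are illustrative assumptions for this sketch, not AppDynamics APIs or any specific product's implementation.

```python
import time
import uuid
from contextlib import contextmanager

SPANS = []  # in a real system these records would be shipped to a central collector


@contextmanager
def span(txn_id, tier, operation):
    """Record how long one tier spends on one operation within a transaction."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({
            "txn_id": txn_id,
            "tier": tier,
            "operation": operation,
            "duration_ms": (time.perf_counter() - start) * 1000,
        })


def database_query(txn_id):
    with span(txn_id, "database", "SELECT orders"):
        time.sleep(0.05)  # stand-in for a real query


def app_tier(txn_id):
    with span(txn_id, "app-server", "checkout"):
        database_query(txn_id)
        time.sleep(0.02)  # stand-in for business logic


def handle_request():
    txn_id = str(uuid.uuid4())  # one ID follows the business transaction through every tier
    with span(txn_id, "web-tier", "POST /checkout"):
        app_tier(txn_id)
    return txn_id


if __name__ == "__main__":
    txn = handle_request()
    # With a shared ID, every tier's contribution to the same transaction can be
    # viewed together -- the "common context" that siloed tools lack.
    for s in SPANS:
        if s["txn_id"] == txn:
            print(f'{s["tier"]:<12} {s["operation"]:<16} {s["duration_ms"]:.1f} ms')
```

The design point is the shared transaction ID: once every tier tags its measurements with it, root-cause analysis becomes a query over one correlated record rather than a manual reconciliation across a dozen disconnected consoles.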

EMA Survey Methodology: AppDynamics commissioned EMA to conduct a survey in May 2015 of nearly 300 IT professionals from small, midsized and large companies across both North America and Europe. For the purposes of the study, respondents were filtered to include only those actively involved in enterprise application development/management/delivery at the executive, middle manager, or "hands on" line staff levels.

Kalyan Ramanathan is VP Marketing at AppDynamics.

