Franken-Monitoring - A Case of Too Many Tools

Most Organizations Have 11 or More Tools to Manage Application Performance
Kalyan Ramanathan

In a recent interview, an IT operations director told us, “We frankly have too many tools, and many of them weren’t performing to our expectations.”

If you are an enterprise ops leader managing complex applications, you can probably relate to that statement. At AppDynamics, we call this “Franken-monitoring,” a situation characterized by many, usually too many, siloed tools — for application, server, database, end-user client, etc. — that provide varying levels of disparate visibility into IT applications.

The challenges with this approach include:

■ Tools have minimal integration or common context, which makes it nearly impossible to manage the application or its business transactions.

■ Tools are designed for subject-matter experts, so it’s hard to provide value to the ops team as a whole.

■ Tools have high total cost of ownership, since every tool has to be independently procured, installed and managed, and staff have to be trained in their use.

2015 APM Tools Survey Finds That Tools Are Underutilized and Solving Performance Problems Is Still a Massive Challenge

We commissioned analyst firm Enterprise Management Associates (EMA) to get to the bottom of this. In the 2015 APM Tools Survey, EMA found that a majority of surveyed enterprises have 11 or more commercial tools in their arsenal to manage application performance.

Nearly two-thirds of respondents report that it takes at least three hours to determine the root cause of performance issues; one-third report that it takes six or more hours to find the source of an issue.

EMA’s survey indicated that the lack of application-focused solutions appears to contribute to current IT challenges, with IT teams often trying to manage modern, complex applications with siloed tools and primarily manual processes. Just about every user of monitoring tools complains about the challenges of having too many tools without any situational awareness. Current approaches to integrate these tools with solutions like MoM (manager of managers) or CMDB (configuration management database) have for the most part failed, because it is hard to stitch together these disparate solutions from different vendors.
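The "common context" problem is easy to see in miniature. When siloed tools share no transaction identifier, a MoM-style integration layer can only guess at relationships, typically by matching timestamps. The sketch below (all event data and function names are illustrative, not any vendor's API) shows why that guesswork is fragile:

```python
from datetime import datetime, timedelta

# Hypothetical alerts from two siloed tools: an APM monitor and a
# database monitor. Note there is no shared transaction ID.
apm_events = [
    {"tool": "apm", "time": datetime(2015, 5, 1, 10, 0, 12), "msg": "checkout latency spike"},
]
db_events = [
    {"tool": "db", "time": datetime(2015, 5, 1, 10, 0, 14), "msg": "slow query on orders table"},
]

def correlate_by_time(a_events, b_events, window_seconds=30):
    """Naive correlation: pair events that occur within a time window.
    Without shared context, this is the best a manager-of-managers can do,
    and under load it both over-matches and misses causal links."""
    window = timedelta(seconds=window_seconds)
    pairs = []
    for a in a_events:
        for b in b_events:
            if abs(a["time"] - b["time"]) <= window:
                pairs.append((a["msg"], b["msg"]))
    return pairs

print(correlate_by_time(apm_events, db_events))
# A wide window pairs unrelated events; a narrow one drops related ones.
```

In a quiet system the single pairing above looks right, but with hundreds of concurrent transactions the same time window yields a combinatorial explosion of false pairings, which is the failure mode the survey respondents describe.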

Gartner recently conducted a survey that pointed to exactly this challenge: the key reasons (besides price) for poor APM adoption were the complexity of the tools and poor integration between them.

Specifically, the EMA study found:

■ Siloed and shelved monitoring tools: 65 percent of the companies surveyed indicated that they own more than 10 different commercial monitoring products. Nearly half also indicated that 50 percent or fewer of their purchased tools are actively being used.

■ Manual resources expended on application support: According to respondents, calls from users are the second-most frequent way IT organizations find out about application-related problems (27 percent cited detection by monitoring centers; 25 percent cited user calls). Line staff, those closest to the problem, report a significantly higher incidence, citing user calls as their first “heads up” 35 percent of the time.

■ Extensive people-hours required to solve a single application problem: IT organizations surveyed indicated that, for those application-related problems escalated beyond Level 1 support, mean time to repair (MTTR) is most often between five and seven hours; in addition, between three and four people are typically required to solve a given problem.

“Based on our findings, the majority of companies are still trying to manage complex applications with a combination of siloed tools, ‘all hands on deck’ interactive marathons, and tribal knowledge,” said Julie Craig, Research Director, Application Management at EMA. “The ability to automatically discover and manage the business transaction topology as the application itself changes is a significant challenge encountered by virtually every IT organization.”

In addition to EMA’s finding that most companies have under-invested in application-specific management tools, the survey also found clear purchasing preferences regarding future APM purchases:

■ Almost 75 percent identified "flexible deployment options" (supporting SaaS, on-premises, and/or hybrid deployments) as either critical or important factors in purchasing an APM solution.

■ More than 70 percent identified the “ability to monitor infrastructure as a service (IaaS) public cloud” as either critical or important.

■ When asked about their top “must have” features for an APM product purchase, respondents selected the following:

#1 feature preference: An integrated monitoring platform consolidating application and infrastructure monitoring in one solution

#2 feature preference: Cloud-readiness features necessary to monitor/manage application components hosted in public cloud

#3 feature preference: Support for trending and reporting

The EMA study shows that very few IT organizations have an accurate, comprehensive view of today's complex application environment, its business transactions and their dependencies. Unified Monitoring offers a different approach: trace and monitor transactions from the end user through the entire application and infrastructure environment, so teams can proactively find and solve performance issues and ensure an excellent user experience. Companies no longer need to waste valuable time and resources on a dozen different tools that will likely just collect dust on the shelf.
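The mechanism behind transaction-centric monitoring can be sketched in a few lines. A single trace ID is minted at the end-user edge and propagated through every tier, so events from all layers join on an exact key rather than on timestamps. This is a minimal illustration of the general technique; the function names and event store are assumptions, not AppDynamics' actual implementation:

```python
import uuid

events = []  # stand-in for a centralized event store

def record(trace_id, tier, msg):
    """Emit one monitoring event tagged with the shared trace ID."""
    events.append({"trace": trace_id, "tier": tier, "msg": msg})

def handle_checkout():
    """Simulate one business transaction crossing three tiers."""
    trace_id = str(uuid.uuid4())  # created once, at the end-user edge
    record(trace_id, "browser", "user clicked 'buy'")
    record(trace_id, "app", "order service invoked")
    record(trace_id, "db", "INSERT into orders")
    return trace_id

tid = handle_checkout()
# Reconstructing the business transaction is now an exact filter,
# not a cross-tool timestamp hunt:
transaction = [e for e in events if e["trace"] == tid]
print([e["tier"] for e in transaction])  # ['browser', 'app', 'db']
```

Because every tier stamps the same ID, the topology of a transaction can be rebuilt automatically even as the application changes, which is precisely the capability Craig identifies as the challenge for most IT organizations.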

EMA Survey Methodology: AppDynamics commissioned EMA to conduct a survey in May 2015 of nearly 300 IT professionals from small, midsized and large companies across both North America and Europe. For the purposes of the study, respondents were filtered to include only those actively involved in enterprise application development/management/delivery at the executive, middle manager, or "hands on" line staff levels.

Kalyan Ramanathan is VP Marketing at AppDynamics.