5 Reasons Application Performance Management Fails
March 27, 2015

Jonah Kowall
Kentik


After speaking with thousands of Application Performance Management (APM) users during my time at Gartner, I have seen five key issues that cause APM implementations to fail:

1. Organizational Immaturity

The first cause of failure is the silos in many of today’s organizations. There are often too many stakeholders involved in APM decision-making, including application support, server teams, network teams, database administrators (DBAs), application developers, and various architects across the organization.

We’re also seeing more non-technical users, such as the business owner of an application who is interested in usage and performance data on critical Business Transactions within the application. These business users will become more central users of APM in the future.

It’s critical to identify the primary users of the product and determine requirements focused on those primary users. Secondary users can have input but should not be the ones driving the key decision points. As products mature, they can sell into multiple areas or even cross-sell through teams, but that expansion shouldn’t be the focus of the initial implementation.

2. Ownership

Typically, the added visibility and capabilities an APM implementation introduces provide immense value to operations, application support, and development, and that generates excitement. The implementation itself, if you select an easy-to-implement product, normally proceeds without many issues, and the product has a clear owner or set of stakeholders.

Over time, as roles and business direction change, APM often loses its key owner. The result is that the product isn’t maintained or used day to day. The way to avoid this is to make executives key stakeholders: as APM and Application Intelligence become critical to business decision-making, executive sponsorship can change what has often been the fate of older APM products.

3. Application Complexity

APM tools are installed for one of two reasons. If APM is strategic, it’s implemented during the development or rollout of a project. The second driver is pain: when the pain threshold gets too high, something is needed to see into the production environment and remediate issues. Restoring understanding and visibility into applications, both old and new, is normally the first benefit. You’ll often hear: “I didn’t know this was connecting to that!”

Issues occur within changing applications for two primary reasons. The first is business model change, driven by greater customer demand or higher volumes of data. The second is feature requests, which require application changes. These two drivers distill down to scale and complexity, both of which make it harder to identify and correct issues (or to pass that data to development so corrective changes can be made to the software).

4. Engineering Skills Required

With yesterday’s APM tools, implementations were incredibly complex and time consuming because of the amount of tuning and customization enterprises required. Companies that failed with APM normally did so because the services engagement was too heavy. This caused the likes of OpTier to go out of business entirely, and ITOM giants to rethink how they approach the market. Many of these companies even have staff members who work full time at customer sites to keep the products up and running. These arrangements are often seen as benefits by the buyer, but eventually they become burdens.

Applications in both the enterprise and consumer worlds have become easy to buy, easy to implement, and quick to demonstrate value. This expectation has permeated IT products as well: buyers expect things to be easy and to show value quickly. The APM winners, today and in the future, build easy-to-implement products and refuse to customize them or push a heavy services engagement.

The key is enabling customers, not offloading the work of using the product or providing staff augmentation. If you are considering managed services, select the right technology first, and then the managed services provider.

Many legacy APM tools are far too complex, with countless config files and GUI features to tune before you get value out of the investment. You shouldn’t need to be a senior technologist to get results. Today’s modern tools are easy to understand and often present information in a way that even level-1 operations engineers can get value from.
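To make the contrast concrete, here is a minimal sketch of the single-flag attach model that modern byte-code-instrumenting agents use in place of hand-tuned config files. The class name MinimalApmAgent is hypothetical, not any vendor’s actual API; a real agent would register a ClassFileTransformer at this point to inject monitoring probes automatically.

    // MinimalApmAgent.java: a hypothetical, stripped-down illustration of the
    // -javaagent attach model used by modern APM products.
    import java.lang.instrument.Instrumentation;

    public final class MinimalApmAgent {
        // The JVM calls premain() before the application's main() when the
        // process is started with -javaagent:minimal-apm-agent.jar
        public static void premain(String agentArgs, Instrumentation inst) {
            // A real agent would call inst.addTransformer(...) here to rewrite
            // application byte code with timing probes, with no per-class
            // configuration required from the operator.
            System.out.println("APM agent attached; auto-instrumentation would begin here.");
        }
    }

Packaged in a jar whose manifest declares Premain-Class: MinimalApmAgent, this attaches with a single JVM flag (java -javaagent:minimal-apm-agent.jar -jar yourapp.jar), which is the ease-of-implementation bar today’s buyers expect.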

5. Focus on the Wrong Thing

Selecting APM technology isn’t just about meeting the needs of your applications today; it’s about anticipating the future state of your applications and infrastructure. What is considered experimental and bleeding edge eventually becomes a standard component of traditional enterprise applications. We’ve already seen this happen with PHP, and we’re beginning to see it with other languages. Today you may be a Java shop on VMware, and possibly even a PHP user on LAMP, but in the future you will likely be a Node.js shop, possibly running on a public PaaS.

Most organizational leaders have a strategy for both private and public cloud, and areas of business innovation and differentiation tend to be built on public clouds. This is the reason Gartner states that “IT spending on public cloud services is growing more than five times faster than growth in IT spending across all categories.”

Similarly, your organization may not have a large mobile investment today, but I can assure you it will in the future. To handle these shifts, many applications are moving from a single programming language to being composed of multiple languages. These technology shifts require people with new, broader skills, or people who can learn new skills quickly. The rise of the full-stack developer and the IT operations generalist shows that many are evolving to meet these challenges and the business’s agility requirements.

Regardless of whether these proof points match your organization, the ability to support past, existing, and, most importantly, future investments is critical when selecting APM technologies. Areas of growth and innovation matter most to senior management and hence provide the most value to the business, and these are the challenges the APM innovators are addressing. Keep that in mind when selecting application management technology, along with the depth and context of the monitoring and analytics.

Jonah Kowall is CTO of Kentik
