
Reports of APM's Death Have Been Greatly Exaggerated

Recently, Art Wittmann at InformationWeek claimed that the APM industry is dying. He wrote, “App performance management is seen as less important than it was two years ago, partly because vendors haven’t kept up.” And he was armed with ample data to support his view.

Looking at survey results from hundreds of APM customers, InformationWeek’s data suggests that the high cost and lengthy implementation process of APM are driving factors in the fall of the industry: insufficient expertise to use the product (50%), high cost (41%), and taking too much staff time to do it right (32%). Interestingly, even as dissatisfaction with APM has increased, the rate of daily outages continues to rise, from 8% in 2010 to 10% today.

The question I pose is this – is there another way to interpret this data? I would argue it is not APM as a whole that is dying but rather legacy APM solutions. The increase in daily outages suggests that APM is more important than ever, but that the industry itself isn't keeping up.

Legacy APM systems have several well-documented problems that have led to user dissatisfaction for years. These products, which require configuration at each component for correct monitoring, come with high costs and long implementation cycles.

For APM to succeed, the industry must focus on deployment efficiency: actual install effort; supporting infrastructure effort, including sufficient, scalable server space; initial configuration effort; and maintenance configuration effort. Initial configuration effort must be reduced, and rules and self-learning should reduce or eliminate maintenance configuration effort.

If these problems disappear, APM tools become much more attractive again. The survey respondents’ complaints about insufficient expertise (50%) and too much time (32%) are effectively mitigated by auto-detection and self-learning.
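To make that concrete, here is a minimal sketch of what self-learning can look like in practice: the monitor builds its own rolling baseline of normal response times per transaction and flags outliers, so no one has to hand-configure or maintain static thresholds. The class name, window size, and transaction names below are illustrative assumptions, not any vendor's implementation.

```python
# A minimal self-learning baseline: learn what "normal" response time looks
# like per transaction type and flag outliers, with no hand-tuned thresholds.
from collections import defaultdict, deque
from statistics import mean, stdev


class SelfLearningBaseline:
    def __init__(self, window=500, sigma=3.0):
        self.sigma = sigma  # how far from normal counts as anomalous
        self.samples = defaultdict(lambda: deque(maxlen=window))  # recent samples per transaction

    def observe(self, transaction, response_ms):
        """Record a measurement and report whether it looks anomalous."""
        history = self.samples[transaction]
        anomalous = False
        if len(history) >= 30:  # judge only once enough data exists
            mu, sd = mean(history), stdev(history)
            anomalous = response_ms > mu + self.sigma * max(sd, 1.0)
        history.append(response_ms)
        return anomalous


baseline = SelfLearningBaseline()
for ms in [120, 130, 118, 125, 122] * 10:  # ordinary traffic trains the baseline
    baseline.observe("checkout", ms)
print(baseline.observe("checkout", 900))   # True: flagged with zero manual rules
```

The only input the user supplies is the measurements themselves; the upfront and ongoing configuration the survey respondents objected to never happens.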

Wittmann also believes that APM tools have failed to keep up with complexity – and that it is too difficult to set up APM tools in a service-oriented design. Again, the common theme here is ease of use. For APM to be truly helpful, the data has to be managed and presented in a way that novices can use without training and that expert users can take further – the more advanced functions – with minimal training.

APM is not just for developers anymore – and the industry has to adjust accordingly. IT operations, app owners and infrastructure folks need understandable and actionable data. In a sense, Wittmann is correct: if you rely on data from siloed monitoring tools (developer-specific, web-server-specific, CPU monitoring, etc.), you won't gather meaningful information.

But he is too broad in his assessment. A transaction-centric approach to APM gives organizations a big-picture view of the interaction between end users, applications, and infrastructure. This view can pinpoint the source of problems quickly because you trace 100% of user transactions.
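As a sketch of what "transaction-centric" means in practice: stamp each user request with a single trace ID, have every tier record its timing against that ID, and the breakdown of a slow transaction falls out on its own. This is a toy illustration of the idea, not how any particular APM product is implemented; the tier names and functions are assumptions for the example, and a real product instruments the tiers automatically rather than via decorators.

```python
# Toy transaction tracing: one trace ID per user request, propagated across
# tiers, with each tier recording its elapsed time against that ID.
import time
import uuid
from collections import defaultdict

TRACES = defaultdict(list)  # trace_id -> list of (tier, elapsed_ms)


def traced(tier):
    """Time a tier's work and file it under the request's trace ID."""
    def wrap(fn):
        def inner(trace_id, *args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(trace_id, *args, **kwargs)
            finally:
                TRACES[trace_id].append((tier, (time.perf_counter() - start) * 1000))
        return inner
    return wrap


@traced("database")
def query_orders(trace_id):
    time.sleep(0.05)          # stand-in for a slow query


@traced("app-server")
def render_orders(trace_id):
    query_orders(trace_id)    # the same trace ID follows the call downstream
    time.sleep(0.01)


trace_id = str(uuid.uuid4())  # assigned when the end user's request arrives
render_orders(trace_id)
for tier, elapsed_ms in TRACES[trace_id]:
    print(f"{tier}: {elapsed_ms:.0f} ms")  # app-server time includes its nested database call
```

Because every request carries its own ID, the breakdown exists for every transaction rather than a sampled few, which is what lets you go straight from "checkout is slow" to the tier responsible.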

Wittmann is not wrong that legacy APM tools struggle with the growing complexity in IT, especially in the cloud. But there is reason for optimism about APM's demonstrated potential to contribute to the success of complex IT operations. Mission-critical application deployments, and therefore the overall success of a company deploying these apps, depend on it.

ABOUT Tom Batchelor

Tom Batchelor is the Senior Solutions Architect at Correlsense and is responsible for creating innovative solutions geared specifically to the needs of clients. Prior to joining Correlsense, he worked in various pre-sales roles for OpTier and Symantec.

Related Links:

www.correlsense.com

