Recently, Art Wittmann at InformationWeek claimed that the APM industry is dying. He wrote, “App performance management is seen as less important than it was two years ago, partly because vendors haven’t kept up.” And he was armed with ample data to support his view.
Looking at survey results from hundreds of APM customers, InformationWeek's data suggests that high cost and lengthy implementations are driving the industry's decline: insufficient expertise to use the product (50%), high cost (41%), and too much staff time to do it right (32%). Interestingly, even as dissatisfaction with APM has increased, the rate of daily outages continues to rise, from 8% in 2010 to 10% today.
The question I pose is this: is there something else to be interpreted from this data? I would argue that it is not APM as a whole that is dying, but rather legacy APM solutions. The increase in daily outages suggests that APM is more important than ever; it is the industry itself that isn't keeping up.
Legacy APM systems have several well-documented problems that have led to user dissatisfaction for years. These products, which require per-component configuration to monitor correctly, come with high costs and long implementation cycles.
For APM to succeed, the industry must focus on deployment efficiency: actual install effort; supporting infrastructure effort, including sufficient, scalable server space; initial configuration effort; and maintenance configuration effort. Initial configuration effort must be reduced through automation, and rules and self-learning should reduce or eliminate maintenance configuration effort.
If these problems disappear, APM tools become much more attractive. The survey respondents' complaints about insufficient expertise (50%) and excessive staff time (32%) are effectively mitigated by auto-detection and self-learning, as the sketch below illustrates.
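To make that concrete, here is a minimal sketch, in Python, of what self-learning baselining might look like: the monitor learns normal response times from observation instead of requiring hand-set thresholds. The class name, window size, and tolerance are illustrative assumptions for this example, not any vendor's implementation.

```python
from collections import deque
from statistics import mean, stdev

class SelfLearningBaseline:
    """Illustrative sketch: learns a response-time baseline from
    observed samples, so no one hand-configures a static threshold."""

    def __init__(self, window=500, tolerance=3.0):
        self.samples = deque(maxlen=window)  # rolling training window
        self.tolerance = tolerance           # allowed standard deviations

    def observe(self, response_time_ms):
        """Record a sample and report whether it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 30:  # wait for a minimal sample size
            mu = mean(self.samples)
            sigma = stdev(self.samples) or 1e-9  # guard against zero spread
            anomalous = response_time_ms > mu + self.tolerance * sigma
        self.samples.append(response_time_ms)
        return anomalous

# Usage: feed measured response times; alerts emerge without manual setup.
baseline = SelfLearningBaseline()
for rt in [120, 130, 125, 118, 122] * 10 + [900]:
    if baseline.observe(rt):
        print(f"Anomaly: {rt} ms exceeds learned baseline")
```

The point of the sketch is the operational model, not the statistics: because the baseline trains itself on live traffic, the "staff time to do it right" largely disappears.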
Wittmann also believes that APM tools have failed to keep up with complexity, and that it is too difficult to set up APM tools in a service-oriented design. Again, the common theme here is ease of use. For APM to be truly helpful, the data has to be managed and presented in a way that novices can use without training and that expert users can master with minimal training for the more advanced functions.
APM is not just for developers anymore, and the industry has to adjust accordingly. IT operations, app owners, and infrastructure folks need understandable and actionable data. In a sense, Wittmann is correct: if you rely on data from siloed monitoring tools (developer-specific, web-server-specific, CPU monitoring, etc.), you won't gather meaningful information.
But he is too broad in his assessment. A transaction-centric approach to APM gives organizations a big-picture view of the interaction between end users, applications, and infrastructure. Because it traces 100% of user transactions, this view can pinpoint the source of problems quickly.
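As an illustration of the idea, and not of any specific product's tracer, here is a hedged sketch of transaction-centric tracing: a single transaction ID follows a request across tiers, each hop records a timed span, and the slowest hop falls out of the data. The tier names and simulated delays are assumptions made for the example.

```python
import time
import uuid
from contextlib import contextmanager

class TransactionTrace:
    """Illustrative sketch: one trace per user transaction; every tier
    appends a timed span, so a slow request can be traced to its cause."""

    def __init__(self):
        self.transaction_id = str(uuid.uuid4())  # follows the request across tiers
        self.spans = []

    @contextmanager
    def span(self, tier):
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            self.spans.append((tier, elapsed_ms))

    def slowest_tier(self):
        return max(self.spans, key=lambda s: s[1])

# Usage: wrap each hop of a (simulated) request in a span.
trace = TransactionTrace()
with trace.span("web"):
    time.sleep(0.01)
with trace.span("app"):
    time.sleep(0.05)   # the bottleneck in this simulation
with trace.span("db"):
    time.sleep(0.02)

tier, ms = trace.slowest_tier()
print(f"Transaction {trace.transaction_id}: slowest tier is {tier} ({ms:.1f} ms)")
```

Because every transaction carries its ID end to end, the answer to "where did this request slow down?" comes from the data itself rather than from correlating separate, siloed tools by hand.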
Wittmann is not wrong that legacy APM tools struggle with the growing complexity of IT, especially in the cloud. But there is reason for optimism about APM's demonstrated potential to contribute to the success of complex IT operations. Mission-critical application deployments, and therefore the overall success of the companies deploying them, depend on it.
ABOUT Tom Batchelor
Tom Batchelor is the Senior Solutions Architect at Correlsense and is responsible for creating innovative solutions geared specifically to the needs of clients. Prior to joining Correlsense, he worked in various pre-sales roles for OpTier and Symantec.