Looking Back at 2017 APM Predictions - Did They Come True? Part 1
January 09, 2018

Jonah Kowall
Kentik


I enjoy the end of the year. Getting some downtime from the constant phone calls and meetings allows me to reflect and plan for a new year ... Planning for a new year often includes predicting what’s going to happen. However, we don't often enough look back at the prior year’s predictions to see if they actually came to fruition. That is the purpose of this analysis. I have picked out a few key areas in APMdigest's 2017 Application Performance Management Predictions, and analyzed which predictions actually came true.

Many of the 2017 predictions were not particularly predictive, but were instead "observations" of what is already happening and how it will accelerate or shift. Without better ground rules for what counts as a prediction, it's hard to say what the list is and is not. I have picked out a few key areas which encompass several of these predictions. I'm not calling out the individual predictions, which often do not align with Application Performance Management (APM) trends but instead serve a specific vendor (gotta love Marketing).

Review the 2017 APM Predictions on APMdigest

The first four key areas below are the predictions that didn't come true. In addition to the ones I have called out here, there were many predictions that vendors wish would come true but that have been failing to materialize for years. Unfortunately for them, that didn't happen in 2017 either. Hopefully, they don't post the same prediction for 2018. :)

Each of the following categories covers several of the 2017 predictions:

The convergence of infrastructure metrics and APM metrics

Although many tools combine APM and infrastructure metrics, most enterprises still use separate tools due to organizational silos. Enterprises that run significant infrastructure continue to deploy monitoring tools for each team, meaning teams running data centers still buy tools for DCIM, network, servers, storage, and other infrastructure technologies. Those implementing cloud infrastructure often use cloud provider tools or other infrastructure monitoring tools that do a better job in those environments and handle time series data better. APM tools have had lightweight infrastructure monitoring capabilities for a while, and they may often replace more capable infrastructure monitoring technologies, especially when organizations migrate to managed data centers or the public cloud.

Adoption of platforms to tie monitoring data together

While there are many products that attempt to tie together data coming from multiple monitoring tools, the integration poses challenges for most. Although some of these products are gaining adoption, they are not having a meaningful impact on the market today. We did not see a shift in this market in 2017, and I doubt it will change in 2018.

AI replacing manual analysis

My perspective on AI is documented. AI is still not prominent in solutions today, even though people have been predicting it for years. Advances in making root cause analysis (RCA) faster have been helpful to users, but manual analysis of data remains the standard today and likely will be for the foreseeable future. We didn't see a shift in 2017, but as I discuss below, predictive analysis and machine learning are becoming more prominent.

Data-centric to event-centric

BUT WILL BECOME TRUE AT SOME POINT

This one was ahead of its time, though it reflects a trend which has been building for several years. Rather than event-based systems replacing data-based systems, I believe there will be an augmented platform to handle both request-response systems and data-based systems. There are many business reasons to have both capabilities within software architectures, and monitoring will naturally have to evolve to handle both types of software systems. Today most "event-based" monitoring is done with log analysis tools, as tracking events over time is a challenge for most monitoring tools. Some tools have been able to show change over time, but not to manage event-based architectures. This will be a continuing trend, driven by event-based programming and the push models for notifications commonly found in technologies built on Node.js and other event-based frameworks.
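To make the distinction concrete, here is a minimal sketch in TypeScript on Node.js. The names (handleRequest, order.created, order.fulfilled) are illustrative assumptions, not from any particular product: a request-response call can be timed by wrapping a single invocation, while an event-based push flow has to be stitched together from separate emissions over time, which is exactly where most monitoring tools struggle.

```typescript
// Minimal sketch: how monitoring "sees" request-response versus event-based flows.
import { EventEmitter } from "node:events";

// Request-response: one call, one duration — easy for classic APM to wrap.
async function handleRequest(orderId: string): Promise<string> {
  const start = Date.now();
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulated work
  console.log(`request ${orderId} took ${Date.now() - start}ms`);
  return `ok:${orderId}`;
}

// Event-based push: work is spread across emissions, so a monitor must
// correlate events (here by orderId) rather than time a single call.
const orderEvents = new EventEmitter();
const pendingSince = new Map<string, number>();

orderEvents.on("order.created", (orderId: string) => {
  pendingSince.set(orderId, Date.now());
});

orderEvents.on("order.fulfilled", (orderId: string) => {
  const started = pendingSince.get(orderId);
  if (started !== undefined) {
    console.log(`order ${orderId} fulfilled after ${Date.now() - started}ms`);
    pendingSince.delete(orderId);
  }
});

// Usage: the request path is observed inline; the event path only becomes
// measurable once both events have been seen and stitched together.
handleRequest("A-1").then(() => {
  orderEvents.emit("order.created", "B-2");
  setTimeout(() => orderEvents.emit("order.fulfilled", "B-2"), 75);
});
```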

Read Looking Back at 2017 APM Predictions - Did They Come True? Part 2, outlining the 2017 APM Predictions that came true.

Jonah Kowall is CTO of Kentik