Looking Back at 2017 APM Predictions - Did They Come True? Part 2

Jonah Kowall

We rarely look back at the prior year's predictions to see whether they actually came to fruition. That is the purpose of this analysis. I have picked out a few key areas from APMdigest's 2017 Application Performance Management Predictions and analyzed which predictions actually came true.

Start with Looking Back at 2017 APM Predictions - Did They Come True? Part 1, to see which predictions did not come true.

The following predictions were spot on, and outline key shifts in the landscape for 2017:

Confusion around AIOps


GARTNER RENAMED IT, WHICH WAS THE PLAN ALL ALONG

Any time there is a shift in technologies, where vendors move from an older concept to a newer one, Gartner adapts the market definition. In the case of ITOA, the core concept was reporting on data, which needed to move, and eventually did move, toward automated analysis of that data via machine learning (ML). As ML advanced, Gartner shifted the definition from ITOA to Algorithmic IT Operations (AIOps). Vendors began adopting and applying these new capabilities, and AIOps was becoming a reality. The next phase is automating these analyses and taking action on the data and insights; hence Gartner changed the term to Artificial Intelligence for IT Operations and expanded the scope significantly. AIOps tools under that expanded definition are not a reality today (for the reasons above), but hopefully they will become one over time. This shift was always the plan at Gartner, but one that needed to evolve over a couple of years. The adoption of ML has been rapid, but we are a far cry from true AI today, even when vendors claim to have it. They do not, unless they are IBM, Google, Facebook, or one of a very small handful of other companies. Most vendors in the IT Operations space are not yet taking advantage of public cloud providers' AI platforms.

Better predictive analysis and machine learning

This one was spot on: we've seen speedy adoption of more advanced ML and better predictive capabilities in most products on the market. Although some vendors have had baselining for over a decade, now virtually every product in the monitoring space does some form of baselining. Much more work is being done to improve these capabilities, and it's about time!
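To make "baselining" concrete: at its simplest, a monitoring product learns the recent normal range of a metric and flags values that fall outside it. Here is a minimal sketch of that idea, using a rolling mean and standard deviation; real products use far more sophisticated models (seasonality, multi-metric correlation), so treat this purely as an illustration.

```python
from collections import deque
from statistics import mean, stdev

class Baseline:
    """Rolling baseline: flag samples more than k standard deviations
    from the recent mean. A deliberately simplified form of what
    monitoring products call 'baselining'."""

    def __init__(self, window=60, k=3.0):
        self.samples = deque(maxlen=window)  # sliding window of history
        self.k = k

    def is_anomaly(self, value):
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > self.k * sigma
        else:
            anomalous = False  # not enough history to judge yet
        self.samples.append(value)
        return anomalous

baseline = Baseline(window=30)
readings = [100, 102, 99, 101, 103, 100, 98, 101, 250]  # last value spikes
flags = [baseline.is_anomaly(r) for r in readings]
# only the final spike is flagged
```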

APM products increasing scale


BUT STILL LACK MARKET LEADING TIME SERIES FEATURES

In 2017, APM products began to scale much more efficiently than in the past (with a couple of exceptions), but market-leading time-series features are still missing from APM products, especially for granular (second-level) data. A separate set of tools, both commercial and open source, is used for scalable, well-visualized time series. I expect this to change eventually, but for now we have fragmentation in this area.

APM tools evolve to support serverless


BUT EARLY

This prediction came true in 2017, but what "support" for serverless (which I prefer to call FaaS) entails remains nebulous. Most APM tools support collecting events from the code, which requires code changes. Code changes are not ideal for those building or managing FaaS, but that's the current state. FaaS vendors are quite closed about exposing the internals of their systems, and some have provided proprietary methods of tracing them. I predict this opens up in 2-3 years, allowing a more automated way of monitoring FaaS.
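The "code changes" in question typically look like wrapping the function handler so each invocation emits a timing event. The sketch below shows the general shape; the `traced` decorator and the `EVENTS` buffer are hypothetical stand-ins for whatever an actual APM agent would provide, not any vendor's real API.

```python
import time
from functools import wraps

EVENTS = []  # stand-in for an APM agent's event buffer / transport

def traced(handler):
    """Wrap a FaaS-style handler so every invocation emits a timing
    event. This is the kind of manual, code-level change the article
    describes as the current state of FaaS monitoring."""
    @wraps(handler)
    def wrapper(event, context):
        start = time.perf_counter()
        try:
            return handler(event, context)
        finally:
            EVENTS.append({
                "function": handler.__name__,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@traced
def handle(event, context):
    # a trivial handler body, for illustration only
    return {"status": 200, "echo": event.get("body")}

result = handle({"body": "hello"}, None)
```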

APM in DevOps Toolchain


AND INCREASING

This one has in fact been true for the last 4+ years, but as toolchains grow in complexity, the integration of APM into both CI and CD pipelines continues to mature. In the CI/CD space, the more advanced commercial solutions include better integration with APM tools as part of their products. More polish is needed, and it will come over the years ahead.
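A common form of this integration is a deployment gate: the pipeline queries the APM tool after a canary release and only promotes the build if error rates stay within a threshold. Here is a minimal sketch of that logic; in a real pipeline the samples would come from the APM tool's API rather than being stubbed in.

```python
def error_rate(samples):
    """Fraction of failed requests in a window of per-request
    success booleans (True = request succeeded)."""
    return sum(1 for ok in samples if not ok) / len(samples)

def deployment_gate(samples, threshold=0.05):
    """Return True if the deployment may proceed. The threshold and
    the stubbed samples are illustrative; a real gate would pull this
    window from the APM tool covering the canary period."""
    return error_rate(samples) <= threshold

healthy = [True] * 99 + [False]        # 1% errors: promote
degraded = [True] * 80 + [False] * 20  # 20% errors: roll back
```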

Hybrid application management


HAS BEEN TRUE FOR YEARS

Hybrid has been typical for a while now, and hence this is not a prediction but a historical observation. APM tools running at the application layer have been managing across infrastructures for years; I would guess 8+ years, in fact. Today's applications are increasingly hybrid, meaning they span several infrastructures, languages, and frameworks. Because of this diversity, APM is critical to managing highly distributed, interconnected applications.

APM + IoT


BUT HAS BEEN HAPPENING FOR YEARS, AND NOW PRODUCTS BEGIN TO EMERGE

The measurement of IoT usage and performance was an accurate prediction, and it became even more real with the launch of IoT capabilities within several leading APM tools. I began seeing this about three years ago, specifically with connected cars and set-top boxes. Because connected cars and set-top boxes have a decent amount of computing resources, they are either instrumented with end-user monitoring (browser, JavaScript, or other APIs) or the code running on the device is treated as a typical application component within APM tools. The providers of these products who discovered this early were able to offer better and more predictable experiences through observation. This is why specific IoT products were introduced in 2017. Great prediction!
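Treating a device as an end user usually means the device emits small measurement beacons, much like browser real-user monitoring. The sketch below shows what such a beacon might look like; the field names and the `settop-42` identifier are illustrative, not any vendor's actual schema.

```python
import json
import time

def make_beacon(device_id, metric, value):
    """Build an end-user-monitoring-style beacon for a device
    component: a small JSON payload the device would POST to the
    APM tool's collector. Schema is hypothetical."""
    return json.dumps({
        "device": device_id,        # which device reported
        "metric": metric,           # what was measured
        "value": value,             # the measurement itself
        "ts": int(time.time()),     # when it was taken
    })

# e.g. a set-top box reporting how long a channel change took
beacon = make_beacon("settop-42", "channel_change_ms", 180)
```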

Please provide feedback on my assessment on Twitter @jkowall or LinkedIn, and if you enjoyed reading this, let me know and I'll be happy to provide my analysis of the 2018 APMdigest predictions next year!
