We rarely look back at the prior year's predictions to see whether they actually came to fruition. That is the purpose of this analysis. I have picked out a few key areas from APMdigest's 2017 Application Performance Management Predictions and analyzed which predictions actually came true.
Start with Looking Back at 2017 APM Predictions - Did They Come True? Part 1, to see which predictions did not come true.
The following predictions were spot on, and outline key shifts in the landscape for 2017:
Confusion around AIOps
GARTNER RENAMED IT, WHICH WAS THE PLAN ALL ALONG
AIOps tools today are not a reality, but hopefully that will change over time
Any time there is a shift in technologies, where vendors move from an older technology concept to a newer one, Gartner adapts the market definition. In the case of ITOA, the core concept was reporting on data, which needed to move, and eventually did, toward automated analysis of that data via machine learning (ML). As ML advanced, Gartner shifted the definition from ITOA to Algorithmic IT Operations (AIOps). Vendors began adopting and applying these new capabilities, and AIOps was becoming a reality. The next phase is automating these analyses and taking action on the data and insights. Hence Gartner changed the term to Artificial Intelligence for IT Operations and expanded the scope significantly. AIOps tools today are not a reality (see the reasons above), but hopefully that will change over time. This shift was always the plan at Gartner, but it needed to evolve over a couple of years. The adoption of ML has been rapid, but we are a far cry from true AI today, even when vendors claim to have it. They do not, unless they are IBM, Google, Facebook, or one of a very small handful of other companies. Most vendors in the IT Operations space are not yet taking advantage of public cloud providers' AI platforms.
Better predictive analysis and machine learning
This one was spot on: we've seen rapid adoption of more advanced ML and better predictive capabilities in most products on the market. Although some vendors have had baselining for over a decade, now virtually every product in the monitoring space does some form of baselining. Much more work is being done to improve these capabilities, and it's about time!
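To make "baselining" concrete, here is a minimal sketch of one common approach: treat a metric's recent history as the baseline and flag values that deviate by more than a few standard deviations. This is an illustrative simplification, not any specific vendor's implementation.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, sigma=3.0):
    """Flag a metric value that deviates more than `sigma` standard
    deviations from the baseline formed by recent history."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mu = mean(history)
    sd = stdev(history)
    if sd == 0:
        return latest != mu
    return abs(latest - mu) > sigma * sd

# Response times in ms: a stable baseline, then a spike
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
print(is_anomalous(baseline, 101))  # within the baseline -> False
print(is_anomalous(baseline, 250))  # clear spike -> True
```

Real products refine this with seasonality (time-of-day, day-of-week patterns) and more sophisticated models, which is exactly where the ML investment is going.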
APM products increasing scale
BUT STILL LACK MARKET LEADING TIME SERIES FEATURES
In 2017, APM products began to scale much more efficiently than in the past (with a couple of exceptions), but there is still a lack of market-leading time-series features in APM products, especially when looking at granular (second-level) data. A separate set of tools, both commercial and open source, is used for scalable, well-visualized time series. I expect this to change eventually, but for now we have fragmentation in this area.
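One reason second-level granularity matters: many backends roll fine-grained samples up into coarser buckets for storage, and averaging inside a bucket hides short spikes. The sketch below, a simplified illustration rather than any product's actual rollup logic, shows the effect.

```python
from collections import defaultdict

def downsample(points, bucket_seconds=60):
    """Roll second-level (timestamp, value) samples up into per-bucket
    averages; any spike inside a bucket is averaged away."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % bucket_seconds].append(value)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# One 60-second window of ~10ms latencies containing a single 500ms spike
points = [(t, 10.0) for t in range(60)]
points[30] = (30, 500.0)
print(downsample(points))  # the 500ms spike is flattened into one average
```

Tools built specifically for time series keep (or intelligently summarize) the raw resolution, which is the capability gap the paragraph above refers to.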
APM tools evolve to support serverless
BUT EARLY
This prediction came true in 2017, but "support" for serverless (which I prefer to call FaaS) is a nebulous term. Most APM tools support collecting events from the code, which requires code changes. Code changes are not ideal for those building or managing FaaS, but that is the current state. FaaS vendors are quite closed about exposing the internals of their systems, and some have provided proprietary methods of tracing them. I predict this opens up in 2-3 years to allow a more automated way of monitoring FaaS.
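To show what "collecting events from the code" looks like in practice, here is a hypothetical wrapper around a FaaS handler. The decorator name, event shape, and the print-based emitter are all assumptions for illustration; a real integration would ship the event to a vendor's collection API.

```python
import functools
import json
import time

def traced(handler):
    """Hypothetical decorator illustrating the code-change approach:
    wrap a FaaS handler so each invocation emits a timing event."""
    @functools.wraps(handler)
    def wrapper(event, context=None):
        start = time.monotonic()
        error = None
        try:
            return handler(event, context)
        except Exception as exc:
            error = type(exc).__name__
            raise
        finally:
            # A real APM agent would send this to its backend, not stdout.
            print(json.dumps({
                "function": handler.__name__,
                "duration_ms": round((time.monotonic() - start) * 1000, 2),
                "error": error,
            }))
    return wrapper

@traced
def handle(event, context=None):
    return {"status": "ok", "input": event}

handle({"user": "demo"})
```

The friction is visible: every function must opt in by changing its code, which is why automated, platform-level tracing is the direction to watch.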
APM in DevOps Toolchain
AND INCREASING
This one has in fact been true for the last 4+ years, but as toolchains increase in complexity, the integration of APM into both CI and CD pipelines continues to mature. In the CI/CD space, the more advanced commercial solutions include better integration with APM tools as part of their products. Increased polish is needed and will continue over the coming years.
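A common shape this integration takes is a quality gate: the pipeline queries APM metrics for the new build and fails the stage if they regress. The function and thresholds below are hypothetical, a sketch of the pattern rather than any specific product's API.

```python
def passes_quality_gate(metrics, max_error_rate=0.01, max_p95_ms=300):
    """Hypothetical CI/CD quality gate: pass only when APM-reported
    metrics for the candidate build stay within thresholds."""
    return (metrics["error_rate"] <= max_error_rate
            and metrics["p95_latency_ms"] <= max_p95_ms)

# A healthy build passes; an error-prone build blocks the deploy.
print(passes_quality_gate({"error_rate": 0.002, "p95_latency_ms": 180}))  # True
print(passes_quality_gate({"error_rate": 0.05, "p95_latency_ms": 180}))   # False
```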
Hybrid application management
HAS BEEN TRUE FOR YEARS
Hybrid has been typical for a while now, so this is not a prediction but a historical observation. APM tools running at the application layer have been managing across infrastructures for years; I would guess 8+ years, in fact. Today's applications are increasingly hybrid, meaning they encompass several infrastructures, languages, and frameworks. Due to this diversity, APM is critical in managing highly distributed, interconnected applications.
APM + IoT
BUT HAS BEEN HAPPENING FOR YEARS, AND NOW PRODUCTS BEGIN TO EMERGE
The measurement of IoT usage and performance was an accurate prediction, and it became even more real with the launch of several IoT product capabilities within leading APM tools. I began seeing this about three years ago, specifically with connected cars and set-top boxes. Since connected cars and set-top boxes have a decent amount of computing resources, they can be instrumented with end-user monitoring (browser, JavaScript, or other APIs), or the code running on the device can be treated as a typical end-user or application component within APM tools. The solution providers who discovered this early were able to offer better, more predictable experiences through observation. This is the reason specific IoT products were introduced in 2017. Great prediction!
Please provide feedback on my assessment on Twitter @jkowall or LinkedIn, and if you enjoyed reading this, let me know and I'll be happy to provide my analysis of the 2018 APMdigest predictions next year!