30 Ways APM Should Evolve - Part 2

APMdigest asked the top minds in the industry what they feel is the most important way Application Performance Management (APM) tools must evolve. The recommendations on this list provide a rare look into the long-term future of APM technology. Part 2 covers the evolution of the relationship between APM and analytics.

Start with 30 Ways APM Should Evolve - Part 1

6. INTEGRATION WITH ANALYTICS

The future evolution of APM solutions will depend on well-chosen integrations — both to inform APM data sets and to enrich other environments, such as those associated with advanced IT analytics for performance and capacity optimization, IT service management for change management, governance, and workflow, and business analytics capabilities to optimize business and IT service outcomes. APM's real value comes not from trying to be the center of the new world order, but from becoming a central player in enabling advanced service delivery and optimization in the digital age.
Dennis Drogseth
VP of Research, Enterprise Management Associates (EMA)

More than ever, APM tools need to go beyond the application level. In today's world of multi-tier, multi-layer, multi-component distributed systems, performance is determined by so many factors that good tools need to capture a cohesive view of all of them, while at the same time preventing the user from drowning in information overload. That requires intelligent tools that can identify what matters, not just dumb data-gathering engines with fancy-looking (but otherwise useless) UIs.
Sven Dummer
Senior Director of Product Marketing, Loggly

7. FOCUS ON CHANGE

Traditionally, APM tools have focused on early detection of the symptoms of performance and availability issues, with the goal of catching them before they develop into an incident. This approach is limited because it requires indicators or patterns of abnormal system behavior: by the time those appear, an issue is already developing, and early indicators are frequently very difficult to link to the actual root cause. It is widely known in the industry that the majority of performance and availability issues are caused by changes. By analyzing actual changes as the true root cause, in addition to watching early indicators, APM tools will significantly improve their ability to prevent issues and minimize the manual investigation needed to link symptoms to the root cause triggering them.
Sasha Gilenson
CEO, Evolven
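The change-centric analysis described above can be sketched in a few lines: given a log of change records and an incident timestamp, surface the changes that landed shortly before the incident as root-cause candidates. This is a simplified illustration, not Evolven's actual method; the record fields and the recency-based ranking are assumptions.

```python
from datetime import datetime, timedelta

def rank_suspect_changes(changes, incident_time, window_hours=24):
    """Return changes within the lookback window, most recent first.

    `changes` is a list of dicts with 'id', 'component', and 'time'
    (datetime) keys -- a simplified stand-in for a real change log.
    """
    window_start = incident_time - timedelta(hours=window_hours)
    recent = [c for c in changes if window_start <= c["time"] <= incident_time]
    # Treat the most recent changes as the strongest root-cause candidates.
    return sorted(recent, key=lambda c: c["time"], reverse=True)

changes = [
    {"id": "CHG-1", "component": "db", "time": datetime(2024, 1, 1, 8, 0)},
    {"id": "CHG-2", "component": "api", "time": datetime(2024, 1, 1, 14, 30)},
    {"id": "CHG-3", "component": "cache", "time": datetime(2023, 12, 28, 9, 0)},
]
suspects = rank_suspect_changes(changes, datetime(2024, 1, 1, 15, 0))
print([c["id"] for c in suspects])  # → ['CHG-2', 'CHG-1']
```

A production system would weigh scope and component proximity as well as recency, but even this crude filter narrows the investigation from "all symptoms" to "what actually changed."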

8. CORRELATE LOGS AND METRICS

Application Performance Management (APM) has been around a long time, but the digital transformation happening today across industries and organizations of all sizes is becoming the key driver for evolution in this space. Traditional APM tools for monitoring provide limited analytics, create siloed views, and are inadequate for effectively managing today's multi-tier, distributed, microservices-based applications. Having real-time access to the complete picture dramatically helps businesses of all sizes continuously build, run, and secure modern applications. As such, the modern APM solution must take a more comprehensive approach, one that unifies log and metric data — tying together the two most critical sources/KPIs for tracking application performance. With the right technology, correlating log and metric data is instant, contextual, and comprehensive, opening up a rich universe of opportunity that spans the full application lifecycle — from code through the entire CI/CD process to end-user behaviors.
Ramin Sayar
CEO, Sumo Logic
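The simplest form of the log/metric correlation described above is a time-bucketed join: count error logs per minute and pair each bucket with the metric sample for the same minute, so a latency spike and its accompanying errors appear side by side. This is a minimal sketch with made-up field names, not any vendor's API.

```python
from collections import Counter
from datetime import datetime

def correlate(logs, metrics):
    """Bucket error logs per minute and pair each metric sample with the
    error count for the same minute, so spikes can be read together."""
    errors_per_minute = Counter(
        entry["time"].replace(second=0, microsecond=0)
        for entry in logs if entry["level"] == "ERROR"
    )
    return [
        {"minute": m["time"], "latency_ms": m["latency_ms"],
         "errors": errors_per_minute.get(m["time"], 0)}
        for m in metrics
    ]

logs = [
    {"time": datetime(2024, 1, 1, 12, 5, 10), "level": "ERROR"},
    {"time": datetime(2024, 1, 1, 12, 5, 40), "level": "ERROR"},
    {"time": datetime(2024, 1, 1, 12, 5, 55), "level": "INFO"},
]
metrics = [
    {"time": datetime(2024, 1, 1, 12, 4), "latency_ms": 120},
    {"time": datetime(2024, 1, 1, 12, 5), "latency_ms": 900},
]
joined = correlate(logs, metrics)
print(joined[1])  # the 12:05 bucket pairs 2 errors with the 900ms spike
```

Real platforms join on trace or request IDs rather than coarse time buckets, but the principle — one query surface over both data types — is the same.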

9. INTEGRATED PERFORMANCE AND CAPACITY MANAGEMENT

In the long term, Application Performance Management (APM) tools need to continue their evolution toward becoming integrated performance and capacity management platforms, using advanced analytics to detect performance issues, attribute the cause to either a problem or demand load, and facilitate repair or infrastructure modifications, respectively. Toward this goal, shorter-term advances should leverage machine learning-based technology to automate the incident detection and attribution functions. Longer term, the adoption of prescriptive analytics combined with Infrastructure as Code (IaC) promises to enable smart, cost-efficient infrastructure provisioning to accommodate varying or increasing demand.
Mike Paquette
VP, Products, Prelert
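The automated incident detection mentioned above can be illustrated with a rolling z-score: flag any point that deviates sharply from the mean of the preceding window. This toy detector stands in for the far richer machine-learning models commercial products use; the window size and threshold here are arbitrary assumptions.

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        # Skip flat baselines (stdev == 0) to avoid division by zero.
        if stdev and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A latency series with one obvious spike at index 6.
latencies = [100, 102, 99, 101, 100, 98, 350, 101]
print(detect_anomalies(latencies))  # → [6]
```

Note the classic weakness this exposes: once the spike enters the baseline window, it inflates the standard deviation and masks later points — one reason production detectors model seasonality and use robust statistics instead.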

10. DATA FROM MULTIPLE SOURCES

APM tools must adapt to the proliferation of monitoring products and the general complexity of the average enterprise. Those that can aggregate data from ANY source via a Common Alert Format (whilst stripping out the "noise" — de-duplicating, enriching, normalizing) and present this data coherently back to the business for more effective correlation of technical issues to business impact shall prevail!
Grant Glading
Sales & Marketing Director, Interlink Software
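The normalize-then-deduplicate pipeline described above can be sketched as two small steps: map each source's field names onto a shared schema, then collapse alerts that describe the same condition. The field names and source mappings below are illustrative assumptions, not a real standard or either tool's actual payload.

```python
def normalize(alert, source):
    """Map a source-specific alert into a hypothetical common format."""
    field_maps = {
        "nagios": {"host_name": "host", "service_desc": "check", "state": "severity"},
        "zabbix": {"hostname": "host", "trigger": "check", "priority": "severity"},
    }
    return {common: alert[src] for src, common in field_maps[source].items()}

def deduplicate(alerts):
    """Collapse alerts sharing host+check+severity, keeping a count."""
    seen = {}
    for a in alerts:
        key = (a["host"], a["check"], a["severity"])
        if key in seen:
            seen[key]["count"] += 1
        else:
            seen[key] = {**a, "count": 1}
    return list(seen.values())

# Two tools reporting the same outage become one enriched alert.
raw = [
    ({"host_name": "web1", "service_desc": "http", "state": "CRITICAL"}, "nagios"),
    ({"hostname": "web1", "trigger": "http", "priority": "CRITICAL"}, "zabbix"),
]
merged = deduplicate([normalize(a, s) for a, s in raw])
print(merged)  # one alert, count == 2
```

Enrichment (service ownership, business impact) would hang off the same common record, which is what makes the business-impact correlation the quote calls for possible.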

Read 30 Ways APM Should Evolve - Part 3, covering the expanding scope of APM tools.
