Redefining APM

Ishan Mukherjee
New Relic

Application performance monitoring (APM) has historically involved a lot of hunting and educated guesswork. If performance deteriorated, monitoring teams would investigate factors such as CPU, RAM and storage availability in hopes of identifying the culprit. This often led to dead ends because the root of the performance problem lay elsewhere. Disparate data points were often displayed on multiple screens, requiring operators to correlate information manually. And problems that weren't easily identified by infrastructure monitoring were nearly impossible to detect.

Now, APM is being redefined by innovations in performance monitoring and a new perspective that places user experience at the center of the equation. Instead of requiring operators to constantly query the system about its status, modern observability solutions continually display the state of the system as part of normal operations. Visualizations enable operators to spot problems quickly, in some cases even before they manifest as a degraded user experience. In short, traditional APM is reactive, while modern approaches are proactive and predictive.

There is clear demand for APM's insights. According to New Relic's 2023 Observability Forecast, more than half (53%) of survey respondents had deployed APM, a 17% increase year-over-year, and nine in 10 (89%) expected to deploy APM by 2026. The monitoring is working: more than two-thirds (69%) of those who currently deploy APM said their organization's MTTR had improved since adopting observability, including 35% who said it improved by 25% or more.

Observability solutions now peer into the deepest recesses of applications, uncovering every factor that may affect performance. These include new cloud-native variables such as the health of software containers, tool- and language-specific characteristics, connectors to external data sources, custom integrations, and application programming interfaces (APIs).

A Complete Picture

The latest generation of APM tools can trace an intricate web of interconnected services to unmask the threads of communication that tie them together. Auto-discovery identifies new applications and code deployments and automatically incorporates them into the fabric of services being monitored. Machine learning observes the factors that affect the performance of individual applications over time and learns to look for changes that presage a slowdown or outage.
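To make the idea concrete, here is a minimal sketch of baseline-based anomaly detection on a response-time series. It uses a simple rolling z-score rather than the learned models a commercial APM product would apply, and all names, window sizes and thresholds are illustrative assumptions:

```python
import statistics
from collections import deque

def detect_anomalies(latencies_ms, window=60, threshold=3.0):
    """Flag samples that deviate sharply from the recent baseline.

    A toy stand-in for the learned baselines in commercial APM tools:
    each point is compared against the mean and standard deviation of
    a trailing window, and a large z-score is reported as an anomaly.
    """
    baseline = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(latencies_ms):
        if len(baseline) >= window // 2:  # wait for a usable baseline
            mean = statistics.fmean(baseline)
            stdev = statistics.pstdev(baseline) or 1e-9
            if (value - mean) / stdev > threshold:
                anomalies.append((i, value))
                continue  # keep the outlier out of the baseline
        baseline.append(value)
    return anomalies

# 200 quiet samples around 120 ms, then a sudden spike
series = [120 + (i % 7) for i in range(200)] + [480, 510, 495]
print(detect_anomalies(series))  # -> [(200, 480), (201, 510), (202, 495)]
```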

A critical feature of today's solutions is an integrated dashboard that lets operators view useful troubleshooting aids such as distributed traces, which track interactions within complex systems, alongside APM telemetry. These tools watch for significant incidents that influence performance and continually aggregate log data into clusters, surfacing patterns without requiring administrators to search or scan through thousands of log entries. Coordinated timestamps correlate changes in performance with possible causal factors and enable operators to drill down on anomalies for problem detection and resolution.
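Log clustering of this kind is often approximated by templating away the variable parts of each line so that structurally identical entries group together. A rough sketch under that assumption (not any vendor's actual algorithm, and the sample log lines are hypothetical):

```python
import re
from collections import Counter

# Replace obviously variable tokens (UUIDs, hex IDs, numbers) with
# placeholders so structurally identical lines fall into one cluster.
_PATTERNS = [
    (re.compile(r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                r"[0-9a-f]{4}-[0-9a-f]{12}\b"), "<uuid>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<hex>"),
    (re.compile(r"\b\d+(\.\d+)?\b"), "<num>"),
]

def template(line: str) -> str:
    for pattern, placeholder in _PATTERNS:
        line = pattern.sub(placeholder, line)
    return line

def cluster(lines):
    """Count how many log lines collapse into each template."""
    return Counter(template(line) for line in lines)

logs = [
    "GET /orders/1041 completed in 38 ms",
    "GET /orders/2213 completed in 41 ms",
    "payment service timeout after 5000 ms",
]
for tpl, count in cluster(logs).most_common():
    print(count, tpl)
# 2 GET /orders/<num> completed in <num> ms
# 1 payment service timeout after <num> ms
```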

The result is a view of application performance from both above and below. At the center of the operator view are the metrics that are most critical to the user experience, such as response and load times. Alongside that are summaries of alerts, deployments, service levels and vulnerabilities, which are the most critical factors in diagnosing performance problems.

If a spike in response times is detected, operators can scroll down to look at elements of infrastructure, dependencies, databases, containers and other services. By viewing distributed traces alongside APM telemetry, they can quickly identify the root cause of service issues and navigate to the relevant trace to further investigate the problem. They can even drill into the application code to spot problematic changes and see when they were introduced.

This doesn't mean traditional metrics are no longer needed. They remain a great way to identify common infrastructure problems such as bad memory or a corrupt database table. The difference with redefined APM is that the customer experience sits at the center, and every factor that affects it is tied to that crucial metric. The latest solutions also enable rich integrations with third-party products as well as connections to the vast collection of APIs, software development kits (SDKs) and tools available in the OpenTelemetry observability framework.
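Since the article points to OpenTelemetry, here is a minimal sketch of instrumenting a function with the OpenTelemetry Python SDK. The tracer name, span names and attributes are hypothetical, and a production setup would export spans to an observability backend over OTLP rather than printing them to the console:

```python
# pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up the SDK: spans are batched and, for this demo, printed to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-demo")  # illustrative instrumentation name

def handle_checkout(order_id: int) -> None:
    # Each request becomes a span; attributes carry the context that
    # dashboards later correlate with response-time metrics and logs.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_card"):
            pass  # call the payment service here

handle_checkout(1041)
```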

Organizations don't have to worry about their APM solutions becoming obsolete and can instead focus on what really matters: delighting users.

Ishan Mukherjee is SVP of Growth at New Relic
