Redefining APM
October 05, 2023

Ishan Mukherjee
New Relic


Application performance monitoring (APM) has historically involved a lot of hunting and educated guesswork. If performance deteriorated, monitoring teams would investigate factors like CPU, RAM and storage availability in hopes of identifying the culprit. This often led to dead ends because the root of the problem lay somewhere else entirely. Disparate data points were often displayed on multiple screens, requiring operators to correlate information manually. And problems that never surfaced in infrastructure metrics at all were nearly impossible to detect.


Now, APM is being redefined by innovations in performance monitoring and a new perspective that places user experience at the center of the equation. Instead of requiring operators to constantly query the system about its status, modern observability solutions continually display the state of the system as part of normal operations. Visualizations enable operators to see problems quickly, in some cases even before they manifest themselves in a degraded user experience. In short, traditional APM is reactive while modern approaches are proactive and predictive.

There is a clear demand for APM's insights. According to New Relic's 2023 Observability Forecast, more than half (53%) of survey respondents had deployed APM, a 17% increase year-over-year, and nine in 10 (89%) expected to deploy it by 2026. The monitoring is working: more than two-thirds (69%) of those who currently deploy APM said their organization's mean time to resolution (MTTR) had improved since adopting observability, including 35% who said it improved by 25% or more.

Observability solutions now peer into the deepest recesses of applications, uncovering every factor that may affect performance. These include cloud-native variables such as the health of software containers, tool- and language-specific characteristics, connectors to external data sources, custom integrations, and application programming interfaces (APIs).

A Complete Picture

The latest generation of APM tools can trace an intricate web of interconnected services to unmask the threads of communication that tie them together. Auto-discovery identifies new applications and code deployments and automatically incorporates them into the fabric of services being monitored. Machine learning observes the factors that affect the performance of individual applications over time and learns to look for changes that presage a slowdown or outage.
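To make the baseline idea concrete, here is a minimal sketch of that kind of learned-baseline detection: a rolling window of response times, with new samples flagged when they deviate sharply from recent history. The window size, warm-up count and z-score threshold are illustrative assumptions, not any vendor's actual model.

```python
from collections import deque
from statistics import mean, stdev

class LatencyBaseline:
    """Learns a rolling latency baseline and flags anomalous samples.

    Illustrative only: real APM platforms use far richer models
    (seasonality, multi-signal correlation), but the core idea is
    comparing new samples against recently learned behavior.
    """

    def __init__(self, window: int = 200, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent response times (ms)
        self.threshold = threshold           # z-score that counts as anomalous

    def observe(self, response_ms: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 30:  # need enough history to judge
            mu = mean(self.samples)
            sigma = stdev(self.samples) or 1e-9  # guard against zero variance
            anomalous = (response_ms - mu) / sigma > self.threshold
        self.samples.append(response_ms)
        return anomalous

baseline = LatencyBaseline()
for ms in [120, 118, 125, 122, 119] * 10 + [480]:
    if baseline.observe(ms):
        print(f"Possible slowdown: {ms} ms is well above the learned baseline")
```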

A critical feature of today's solutions is an integrated dashboard that enables operators to view useful troubleshooting aids such as distributed traces — which track interactions within complex systems — alongside APM telemetry. These tools look for significant incidents that influence performance and continually aggregate log information into clusters, allowing patterns to be spotted without administrators searching or scanning through thousands of log entries. Coordinated timestamps correlate changes in performance with possible causal factors and enable operators to drill down on anomalies for problem detection and resolution.
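One way to picture that clustering step: collapse the variable parts of each log line (numbers, hex identifiers) into placeholders so that lines produced by the same code path group together, and counts reveal the patterns. The regex rules below are a deliberately simplified assumption about how such template extraction can work, not a description of any particular product.

```python
import re
from collections import Counter

def log_template(line: str) -> str:
    """Collapse variable parts of a log line into placeholders so
    structurally identical messages group together."""
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)  # hex identifiers
    line = re.sub(r"\b\d+(\.\d+)?\b", "<NUM>", line)     # numbers
    return line

logs = [
    "Request 4812 completed in 118 ms",
    "Request 4813 completed in 121 ms",
    "Request 4814 failed after 3001 ms",
    "Request 4815 completed in 119 ms",
]

clusters = Counter(log_template(line) for line in logs)
for template, count in clusters.most_common():
    print(f"{count}x  {template}")
# Patterns stand out without scanning individual entries:
# 3x  Request <NUM> completed in <NUM> ms
# 1x  Request <NUM> failed after <NUM> ms
```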

The result is a view of application performance from both above and below. At the center of the operator view are the metrics that are most critical to the user experience, such as response and load times. Alongside that are summaries of alerts, deployments, service levels and vulnerabilities, which are the most critical factors in diagnosing performance problems.
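As a concrete illustration of keeping user-facing metrics at the center, the sketch below computes a 95th-percentile response time from raw samples and checks it against a service-level objective; the 300 ms target and nearest-rank method are assumptions for illustration, not figures from the article.

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: small and dependency-free."""
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(0, k)]

response_times_ms = [110, 95, 130, 125, 480, 105, 118, 122, 99, 140]
p95 = percentile(response_times_ms, 95)

SLO_MS = 300  # hypothetical objective: 95% of requests under 300 ms
status = "met" if p95 <= SLO_MS else "breached"
print(f"p95 = {p95} ms -> SLO {status}")
```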

If a spike in response times is detected, operators can scroll down to look at elements of infrastructure, dependencies, databases, containers and other services. By viewing distributed traces alongside APM telemetry, they can quickly identify the root cause of service issues and navigate to the relevant trace to further investigate the problem. They can even drill into the application code to spot problematic changes and see when they were introduced.

This doesn't mean traditional metrics are no longer needed. They are still a great way to identify common infrastructure problems such as bad memory or a corrupt database table. The difference with redefined APM is that the customer experience is at the center and all the factors that affect it are tied to that crucial metric. The latest solutions also enable rich integrations with third-party products, as well as connections to the vast collection of APIs, software development kits and tools available in the OpenTelemetry observability framework.
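Since OpenTelemetry is called out by name, a minimal sketch of what instrumenting a service with its Python SDK can look like may help; the service and span names here are invented for illustration, and a real deployment would export spans to an APM backend rather than the console.

```python
# Minimal OpenTelemetry tracing sketch (pip install opentelemetry-sdk).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Wire the SDK to print finished spans to the console; a real deployment
# would swap in an exporter pointed at its APM backend.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # service name is illustrative

def handle_checkout(order_id: str) -> None:
    # Parent span for the request; the nested spans land in the same trace,
    # which is what lets a distributed trace tie separate steps together.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # payment call would go here
        with tracer.start_as_current_span("update_inventory"):
            pass  # inventory call would go here

handle_checkout("order-42")
```

Viewed in a backend, the child spans appear under the parent with their own timings, which is exactly the drill-down path described above: from a slow request to the specific step that caused it.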

Organizations don't have to worry about their APM solutions becoming obsolete but can focus on what really matters: delighting users.

Ishan Mukherjee is SVP of Growth at New Relic.