
The Case for Application Experience Monitoring

Why "app assurance" is just as important as (or more important than) APM
Andrew Marshall

For today's software development teams, application performance monitoring (APM) is a fairly ubiquitous technology and an effective tool for monitoring how applications perform in production. APM functionality has evolved since it arrived on the scene in the late 1990s, with several vendors building monitoring that works well with distributed (i.e. not monolithic) applications. Despite these advances, APM remains at its core a mechanism for Dev teams to track how an application is working at the code and transaction level.


While this is still useful, it doesn't address the ultimate goal of DevOps teams: delivering the desired application experience to end users. Code working perfectly doesn't matter much if apps aren't reaching customers, or are degraded by network latency or outages. All the customer cares about is their experience of the app. To effectively guarantee application availability and user satisfaction, DevOps teams need to incorporate three important application assurance data sets into their delivery automation logic:

■ Application user experience: Real User Monitoring (RUM)

■ Real-time infrastructure health status: Synthetic testing

■ IT tool data feeds: Key IT health data such as (traditional) APM, local load balancer data, and cloud metrics

Application User Experience: Real User Monitoring (RUM)

When is an app truly "green"? Answer: when it's working correctly for end users. Real user monitoring (RUM) allows Ops teams to fully understand how internet performance impacts customer satisfaction and engagement. No matter where an app is hosted — in clouds, data centers, or CDNs — Ops teams need to make sure delivery of these apps looks good from the user's perspective. RUM gives teams a real-time understanding of worldwide network health, which in turn delivers the performance data needed to automate app delivery and ensure the best user experience your application can offer. An end user-centric approach to application assurance is critical to Application Experience Monitoring.

Real-Time Infrastructure Health Status: Synthetic Testing

Modern infrastructure is dynamic, distributed, and heterogeneous. When your delivery architecture comprises one or more clouds, data centers, or CDNs, understanding the status of your infrastructure becomes a difficult proposition. It's critical that you test all of your endpoints: in your public clouds, private clouds, data centers, and CDNs. This provides a comprehensive and uniform view of the overall health of your application delivery, no matter what the status of your various infrastructure components happens to be.

Synthetic testing acts like a virtual endpoint, testing the throughput of an application, video, or large file download. Being able to test your app from remote locations worldwide helps ensure your measurement data is fresh and low-latency, and therefore actually usable for your app delivery strategy. Healthy infrastructure makes for deliverable apps.
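At its simplest, a synthetic test is a scripted request fired on a schedule from a known vantage point, with the response time compared against a budget. The sketch below is illustrative only: `probe` times any injected fetch callable (in practice a wrapper around a real HTTP client), and the latency budget is an assumed value:

```python
import time

def probe(fetch, latency_budget_ms=500):
    """Time a single synthetic fetch and classify the endpoint's health.

    `fetch` is any zero-argument callable that performs the request;
    injecting it keeps the probe logic testable without network access.
    """
    start = time.perf_counter()
    try:
        fetch()
    except Exception:
        return {"status": "down", "latency_ms": None}
    elapsed_ms = (time.perf_counter() - start) * 1000
    status = "healthy" if elapsed_ms <= latency_budget_ms else "degraded"
    return {"status": status, "latency_ms": round(elapsed_ms, 1)}

# Simulated endpoints: a fast origin, a slow edge, and a dead one.
print(probe(lambda: None))
print(probe(lambda: time.sleep(0.6)))
print(probe(lambda: 1 / 0))
```

Running probes like this from multiple worldwide locations, rather than one, is what turns a health check into the "comprehensive and uniform view" described above.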

IT Tool Data Feeds

As mentioned, a basic understanding of how an app is performing at the code and transaction level (i.e. traditional APM) is still important. This monitoring data is a key part of the third aspect of application assurance that DevOps teams need to leverage alongside RUM and infrastructure health: IT tool data feeds. A variety of other monitoring and real-time metrics are available to IT Ops to help automate app delivery with the most robust possible data set. (Traditional) APM is certainly one of these. Understanding the health of the app code is still useful for making real-time delivery decisions in your software-defined app delivery platform.

On top of that, there are many other data sources to leverage, such as local load balancer health metrics (e.g. NGINX, HAProxy) and cloud status metrics (e.g. AWS CloudWatch). These are just a few examples. Chances are your business collects data from line-of-business (LOB) apps or other mission-critical services that are instrumental to your IT organization. These are tools you're already paying for, so you should feed them into your application delivery automation if they're accessible. They're just as important as traditional APM.
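These heterogeneous feeds only become useful for automation once they are normalized into a common shape. One minimal approach, sketched below with made-up feed names and illustrative weights, is to map each feed onto a 0–1 health score and blend them per endpoint:

```python
# Hypothetical snapshot of per-endpoint feeds: APM error rate, load
# balancer healthy-backend ratio, and a cloud provider status flag.
feeds = {
    "us-east-origin": {"apm_error_rate": 0.002, "lb_healthy_ratio": 1.0, "cloud_ok": True},
    "eu-west-origin": {"apm_error_rate": 0.09,  "lb_healthy_ratio": 0.5, "cloud_ok": True},
}

def health_score(feed):
    """Blend the feeds into one 0-1 score; the weights are illustrative."""
    apm = max(0.0, 1.0 - feed["apm_error_rate"] / 0.05)  # a 5% error rate scores 0
    lb = feed["lb_healthy_ratio"]
    cloud = 1.0 if feed["cloud_ok"] else 0.0
    return round(0.4 * apm + 0.4 * lb + 0.2 * cloud, 3)

scores = {name: health_score(f) for name, f in feeds.items()}
print(scores)
```

The point is not these particular weights but the normalization: once every feed, from APM to LOB tools, is reduced to a comparable score, the delivery platform can act on all of them with one rule set.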

DevOps Requires Insight + Action

DevOps teams are under constant pressure to support continuous deployment, agile methodology, and acceptable uptime for applications. "Monitoring" isn't a solution in itself; it's just a way to collect data. Ops teams then use this data to make sure apps are delivered to customers with an optimal experience in mind. When both Dev and Ops teams have a single lens for viewing IT health data (from the three sources above) and a shared set of application delivery rules, they can react quickly to changes in these data feeds to assure the one thing that matters: the application experience of end users. Application Experience Monitoring as a practice helps make this possible.

Once DevOps teams understand how the Application Experience impacts global customers, the next important step is to do something with that information. That's where a software-defined application delivery platform comes in. Leveraging this powerful data set to automate application, video, and website delivery allows Ops teams to "self-heal" when network outages or latency issues happen. Insight plus action is the next step for APM.
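The "insight plus action" loop can be reduced to a steering rule: given the current per-endpoint health picture, keep serving from the preferred endpoint while it is healthy, and fail over automatically when it degrades. The data shape and threshold below are assumptions for illustration, not a specific platform's behavior:

```python
def choose_endpoint(endpoints, preferred, min_score=0.7):
    """Self-healing steering rule: keep the preferred endpoint while it
    is healthy; otherwise fail over to the highest-scoring alternative."""
    if endpoints.get(preferred, 0.0) >= min_score:
        return preferred
    healthy = {name: s for name, s in endpoints.items() if s >= min_score}
    if not healthy:
        return preferred  # nothing better available: degrade in place
    return max(healthy, key=healthy.get)

# During an outage the preferred origin's score collapses, and traffic
# is steered to the next-best healthy endpoint without human action.
scores = {"us-east-origin": 0.2, "eu-west-origin": 0.95, "cdn-edge": 0.9}
print(choose_endpoint(scores, preferred="us-east-origin"))
```

A rule this simple is what "self-healing" amounts to in practice: the monitoring data closes the loop by driving the routing decision directly.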
