
Monitoring Alert: Don't Get Lost in the Clouds

Mehdi Daoudi
Catchpoint

The cloud is the technological megatrend of the new millennium, delivering ease of use, efficiency and velocity to everyone from small businesses to large enterprises. But it was never meant to be the only answer for every situation. In the world of digital experience monitoring (DEM) — where the end user experience is paramount — cloud-based nodes, along with a variety of other node types, are used to build a view of the end user's digital experience. But major companies are now depending solely on cloud nodes for DEM. Research from Catchpoint, in addition to real-world customer data, shows this is a mistake.

Bottom line: if you want an accurate view of the end user experience, you can't monitor only from the cloud. And if you're using the cloud to monitor something also based in the cloud (like many customer-facing apps), you're compounding the problem. You can't expect an accurate last mile performance view by measuring a digital service from the same infrastructure in which it's located.

This is akin to the mistake many made in the early days of monitoring: tracking site performance by measuring only from the data center where the site was hosted. That's far too limited a perspective, given the multitude of performance-impacting elements beyond the firewall. Let's take a look at cloud-only monitoring limitations and how to effectively navigate them.

How Cloud-Only Monitoring Can Create Blind Spots

Consider what happened last year, when a company received alerts that its services were down. After a mad scramble to fix the problem, it discovered the services were fine: the alerts were caused by an outage on its cloud-based monitoring nodes! The end user experience was untouched. Good news, but also proof of the noise and false positives that can occur when you monitor from only one place, and in particular from a cloud-only view.

This led to further research. One example was a series of synthetic monitoring tests on a single request to a website hosted in AWS's Washington DC data center. The tests were run from cloud-only nodes on AWS, with parallel tests from synthetic monitoring nodes in traditional internet backbone data center locations. The tests ran from August 1, 2018 across seven different nodes — the Washington DC AWS data center, three backbone nodes in Washington DC, and three backbone nodes in New York, NY — producing over 1.7 million measurements. Here are the results.


As you can see, response times for tests run only from the cloud are faster by a significant margin. The median response time from the AWS node (bottom line, in orange) was 31ms, while the median response time from Level3's Washington DC backbone node was 117ms, and from Verizon's New York backbone node, 167ms. The cloud node measurement alone does not provide a realistic view of how end users are experiencing this particular site, and would lull an operations team into a false sense of security — not the kind of blind spot a retail website wants, particularly during the critical holiday shopping season.
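The per-node medians quoted above are easy to compute from raw synthetic samples. Here is a minimal sketch of that aggregation step in Python; the node names and sample values are illustrative stand-ins echoing the article's figures, not the actual Catchpoint dataset:

```python
from statistics import median
from collections import defaultdict

def median_by_node(samples):
    """Group (node, response_ms) samples and return each node's median latency."""
    by_node = defaultdict(list)
    for node, response_ms in samples:
        by_node[node].append(response_ms)
    return {node: median(times) for node, times in by_node.items()}

# Illustrative samples only (hypothetical node names, values chosen to
# echo the medians in the article):
samples = [
    ("aws-washington-dc", 29), ("aws-washington-dc", 31), ("aws-washington-dc", 34),
    ("level3-washington-dc", 110), ("level3-washington-dc", 117), ("level3-washington-dc", 125),
    ("verizon-new-york", 160), ("verizon-new-york", 167), ("verizon-new-york", 171),
]

medians = median_by_node(samples)
# → {'aws-washington-dc': 31, 'level3-washington-dc': 117, 'verizon-new-york': 167}
# The cloud node's median is a fraction of the backbone medians — exactly
# the kind of gap that makes cloud-only numbers misleading.
```

Comparing medians per vantage point, rather than one pooled median, is what surfaces the cloud-versus-backbone gap in the first place.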

Why is this so? Tests run from the cloud against a cloud-hosted site enjoy some form of dedicated network connection as well as preferential data routing. Think of it like a VIP's cleared traffic route through a crowded city. This streamlined data path is far afield from that of an average end user, who receives content after a long, circuitous route through ISPs, CDNs, wireless networks and various other pathways.

Applications Not Suitable for Cloud-Only Monitoring

Another way of explaining this: cloud-only monitoring does not track performance along the entire application delivery chain, nor does it provide the diagnostics required to manage that chain. Any single point along that path — ISPs for example — can create problems impacting the end user experience.

Other important monitoring tasks not well suited to a cloud-only approach include:

■ SLA measurements for third parties along the delivery chain

■ Provider performance testing for services like CDNs, DNS, ad servers

■ Benchmarking for competitors in your industry

■ Network or ISP connectivity issues

■ DNS availability or validation of service

Where Cloud-Only Monitoring Is Beneficial

Of course, it's not all bad news. Cloud monitoring can provide valuable insights for certain applications such as:

■ Determining availability and performance of an application or service from within the cloud infrastructure environment

■ Performing first mile testing without deploying agents in physical locations

■ Testing some of the basic functionality and content of an application

■ Evaluating the latency of cloud providers back to your infrastructure
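As a concrete illustration of the last bullet, a probe like the following could run on a cloud instance and time the network round trip back to your own infrastructure. This is a hypothetical sketch, not a Catchpoint feature; the hostname is a placeholder:

```python
import socket
import time
from statistics import median

def tcp_connect_ms(host, port=443, timeout=5.0):
    """Time one TCP handshake to the target, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about handshake time
    return (time.perf_counter() - start) * 1000.0

def probe_latency(host, port=443, attempts=5):
    """Take the median of several connects to smooth out one-off jitter."""
    return median(tcp_connect_ms(host, port) for _ in range(attempts))

# Run from a node inside the cloud provider, pointed at your own origin
# (placeholder hostname), to gauge provider-to-origin latency:
# probe_latency("origin.example.com")
```

A TCP handshake isolates pure network latency from application processing time, which is what you want when the question is "how far is this cloud region from my infrastructure," not "how fast is my app."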

Conclusion and Best Practices

The key to avoiding the cloud-only DEM trap is to understand that the accuracy of your monitoring strategy depends on how your measurements are taken and from which locations. Cloud-based vantage points can be a valuable piece of the monitoring puzzle, but should not be relied upon as your sole monitoring infrastructure, as they won't be able to track the many network layers comprising the internet.

The answer will most likely be adding a blend of backbone, broadband, ISP, last mile and wireless monitoring. Start where your customers are located and work your way back along the delivery chain. By canvassing all the elements that can impact their experience, you'll have the most accurate view of that experience, as well as the best opportunity to preempt performance problems before end users are affected.

Mehdi Daoudi is CEO and Co-Founder of Catchpoint
