
Monitoring Alert: Don't Get Lost in the Clouds

Mehdi Daoudi

The cloud is the technological megatrend of the new millennium, creating ease of use, efficiency and velocity for everyone from small businesses to large enterprises. But it was never meant to be the only answer for every situation. In the world of digital experience monitoring (DEM) — where the end user experience is paramount — cloud-based nodes, along with a variety of other node types, are used to build a view of the end user's digital experience. But major companies are now depending solely on cloud nodes for DEM. Research from Catchpoint, in addition to real-world customer data, shows this is a mistake.

Bottom line: if you want an accurate view of the end user experience, you can't monitor only from the cloud. And if you're using the cloud to monitor something also based in the cloud (like many customer-facing apps), you're compounding the problem. You can't expect an accurate last mile performance view by measuring a digital service from the same infrastructure in which it's located.

This is akin to the mistake many made in the early days of monitoring: tracking site performance by measuring only from the data center where the site was hosted. That's far too limited a perspective, given the multitude of performance-impacting elements beyond the firewall. Let's take a look at the limitations of cloud-only monitoring and how to navigate them effectively.

How Cloud-Only Monitoring Can Create Blind Spots

Consider an example from last year: a company received alerts that its services were down. After a mad scramble to fix the problem, it discovered the services were fine; the alerts were caused by an outage on its cloud-based monitoring nodes! The end user experience was untouched. Good news, but also proof of the noise and false positives that can occur when you monitor from only one place, and in particular, from a cloud-only view.

This led to further research. One example was a series of synthetic monitoring tests on a single request to a website hosted in AWS's Washington DC data center. The tests were run from cloud nodes on AWS, with parallel tests from synthetic monitoring nodes in traditional internet backbone data centers. Testing began August 1, 2018 and ran from seven different nodes: the Washington DC AWS data center, three backbone nodes in Washington DC, and three backbone nodes in New York, NY. This produced over 1.7 million measurements. Here are the results.
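To make the methodology concrete, here is a minimal sketch, in Python, of the kind of synthetic test described above: time a single HTTP request repeatedly from one vantage point and report the median. The URL and sample count are illustrative placeholders, not Catchpoint's actual test configuration; a study like the one above would run equivalent tests on a fixed schedule from each node.

```python
# Minimal synthetic-test sketch: time one HTTP request repeatedly from this
# vantage point and report the median response time. The URL and sample
# count are placeholders; the study above ran scheduled tests from seven nodes.
import statistics
import time

import requests

TARGET_URL = "https://example.com/"  # placeholder for the AWS-hosted site
SAMPLES = 100

def measure_once(url):
    """Return wall-clock response time for a single GET, in milliseconds."""
    start = time.perf_counter()
    requests.get(url, timeout=10)
    return (time.perf_counter() - start) * 1000

def run_test(url, samples):
    """Collect `samples` measurements and return their median."""
    times = [measure_once(url) for _ in range(samples)]
    return statistics.median(times)

if __name__ == "__main__":
    print(f"median response time: {run_test(TARGET_URL, SAMPLES):.0f} ms")
```

Running the same script from an EC2 instance in the site's own region and from a backbone colocation facility would reproduce the kind of gap the study observed.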


[Chart: median response times from the AWS cloud node vs. the six backbone nodes]

As you can see, tests run only from the cloud report significantly faster response times. The median response time from the AWS node (bottom line, in orange) was 31ms, while the median from Level3's Washington DC backbone node was 117ms, and from Verizon's New York backbone node, 167ms. The cloud node measurement alone does not provide a realistic view of how end users are experiencing this particular site, and would lull an operations team into a false sense of security. That is not the kind of blind spot a retail website wants, particularly during the critical holiday shopping season.

Why is this so? Tests run from the cloud against a cloud-hosted site enjoy a form of dedicated network connection as well as preferential data routing. Think of it like a VIP's cleared traffic route through a crowded city. This streamlined data path bears little resemblance to that of an average end user, whose content arrives after a long, circuitous route through ISPs, CDNs, wireless networks and various other pathways.
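You can see this path difference for yourself by comparing routes from different vantage points. The sketch below, which assumes a Unix-like host with traceroute installed, simply counts the hops traceroute reports; the target hostname is a placeholder, and hop count is only a rough proxy for route complexity.

```python
# Rough route comparison: run the system traceroute to a target and count
# the hops. Markedly fewer hops from a cloud vantage point to a cloud-hosted
# site is one visible symptom of the preferential path described above.
import re
import subprocess

def hop_count(host):
    """Return the number of hops traceroute reports to `host` (Linux/macOS)."""
    out = subprocess.run(
        ["traceroute", "-m", "30", host],
        capture_output=True, text=True, timeout=120,
    ).stdout
    # Each hop line starts with its hop number, e.g. " 5  ae-1.example.net ..."
    hops = re.findall(r"^\s*(\d+)\s", out, re.MULTILINE)
    return int(hops[-1]) if hops else 0

if __name__ == "__main__":
    print("hops to example.com:", hop_count("example.com"))  # placeholder target
```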

Applications Not Suitable for Cloud-Only Monitoring

Another way of explaining this: cloud-only monitoring does not track performance along the entire application delivery chain, nor does it provide the diagnostics required to manage that chain. Any single point along that path — ISPs for example — can create problems impacting the end user experience.

Monitoring tasks not suited to a cloud-only approach include:

■ SLA measurement for third parties along the delivery chain

■ Performance testing of providers such as CDNs, DNS and ad servers

■ Benchmarking against competitors in your industry

■ Diagnosing network or ISP connectivity issues

■ Checking DNS availability and validating the service (a minimal probe is sketched below)
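On the last item: a meaningful DNS availability check has to query specific resolvers from outside the provider's own network. Below is a minimal sketch using the third-party dnspython library; the hostname and resolver IPs are placeholders, and production tooling would also validate the returned records, not just the lookup's success.

```python
# Minimal DNS availability probe: query a name against several public
# resolvers and report success/failure plus lookup time for each.
# Requires the third-party `dnspython` package; name and IPs are placeholders.
import time

import dns.exception
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1"}

def check_dns(name, resolver_ip):
    """Resolve `name` via one resolver; return (ok, elapsed_ms)."""
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [resolver_ip]
    start = time.perf_counter()
    try:
        res.resolve(name, "A", lifetime=5)
        ok = True
    except dns.exception.DNSException:
        ok = False
    return ok, (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for label, ip in RESOLVERS.items():
        ok, ms = check_dns("example.com", ip)  # placeholder hostname
        print(f"{label}: {'OK' if ok else 'FAIL'} ({ms:.0f} ms)")
```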

Where Cloud-Only Monitoring Is Beneficial

Of course, it's not all bad news. Cloud monitoring can provide valuable insights for certain applications such as:

■ Determining availability and performance of an application or service from within the cloud infrastructure environment

■ Performing first mile testing without deploying agents in physical locations

■ Testing some of the basic functionality and content of an application

■ Evaluating the latency of cloud providers back to your infrastructure (a quick probe is sketched below)
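On that last point, TCP connect time is a cheap proxy for round-trip latency between a cloud region and your own infrastructure. The sketch below times a socket connection; the endpoint is a placeholder for a host you control, and you would run it from inside each cloud region you use.

```python
# Cheap latency probe: time TCP connections from this (cloud) host back to
# your own infrastructure. The endpoint is a placeholder for a host you
# control; run it from inside each cloud region and compare the medians.
import socket
import statistics
import time

def connect_time_ms(host, port=443):
    """Return the time to establish a TCP connection, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection is closed on exiting the with-block
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [connect_time_ms("origin.example.com") for _ in range(20)]
    print(f"median TCP connect time: {statistics.median(samples):.1f} ms")
```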

Conclusion and Best Practices

The key to avoiding the cloud-only DEM trap is to understand that the accuracy of your monitoring strategy depends on how your measurements are taken and from which locations. Cloud-based vantage points can be a valuable piece of the monitoring puzzle, but they should not be your sole monitoring infrastructure, as they can't track the many network layers that make up the internet.

The answer will most likely involve a blend of backbone, broadband, ISP, last mile and wireless monitoring. Start where your customers are located and work your way back along the delivery chain. By canvassing all the elements that can impact their experience, you'll have the most accurate view of that experience, as well as the best opportunity to preempt performance problems before end users are affected.
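In practice, "a blend" means a vantage-point inventory along the lines of the hypothetical configuration below. Every location, network and node type listed is illustrative, not a prescribed setup; the point is coverage across the delivery chain, starting at the last mile.

```python
# Hypothetical vantage-point mix for a blended DEM strategy. All entries are
# illustrative; what matters is coverage across node types, not these specifics.
VANTAGE_POINTS = [
    {"type": "last_mile", "location": "Washington DC", "network": "Comcast"},
    {"type": "wireless",  "location": "New York, NY",  "network": "Verizon LTE"},
    {"type": "broadband", "location": "New York, NY",  "network": "Spectrum"},
    {"type": "backbone",  "location": "Washington DC", "network": "Level3"},
    {"type": "cloud",     "location": "us-east-1",     "network": "AWS"},
]

def coverage_by_type(points):
    """Count vantage points per node type, to spot gaps in the blend."""
    counts = {}
    for p in points:
        counts[p["type"]] = counts.get(p["type"], 0) + 1
    return counts

print(coverage_by_type(VANTAGE_POINTS))
```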
