Monitoring Alert: Don't Get Lost in the Clouds
November 28, 2018

Mehdi Daoudi
Catchpoint


The cloud is the technological megatrend of the new millennium, creating ease of use, efficiency and velocity for everyone from small businesses to large enterprises. But it was never meant to be the only answer for every situation. In the world of digital experience monitoring (DEM), where the end user experience is paramount, cloud-based nodes are used along with a variety of other node types to build a view of the end user's digital experience. Yet major companies are now depending solely on cloud nodes for DEM. Research from Catchpoint, along with real-world customer data, shows this is a mistake.

Bottom line: if you want an accurate view of the end user experience, you can't monitor only from the cloud. And if you're using the cloud to monitor something that is also based in the cloud (like many customer-facing apps), you're compounding the problem. You can't expect an accurate last-mile performance view by measuring a digital service from the same infrastructure in which it's hosted.

This is akin to the mistake many made in the early days of monitoring: tracking site performance by measuring only from the data center where the site was hosted. That's far too limited a perspective, given the multitude of performance-impacting elements beyond the firewall. Let's take a look at cloud-only monitoring limitations and how to effectively navigate them.

How Cloud-Only Monitoring Can Create Blind Spots

For example: last year a company received alerts that its services were down. After a mad scramble to fix the problem, it discovered the services were fine, and the alerts were caused by an outage on its own cloud-based monitoring nodes! The end user experience was untouched. Good news, but also proof of the noise and false positives that can occur when you monitor from only one place, and in particular from a cloud-only view.

This led to further research. One example was a series of synthetic monitoring tests on a single request to a website hosted in AWS's Washington, DC data center. The test was run from a cloud node on AWS, with parallel tests on synthetic monitoring nodes running in traditional internet backbone data center locations. The tests ran starting August 1, 2018 from seven different nodes: the Washington, DC AWS data center, three backbone nodes in Washington, DC, and three backbone nodes in New York, NY. In total this amounted to over 1.7 million measurements. Here are the results.


As you can see, the response times of tests run only from the cloud are faster by a significant margin. The median response time from the AWS node (bottom line, in orange) was 31ms, while the median response time from Level3's Washington, DC backbone node was 117ms, and from Verizon's New York backbone node, 167ms. The cloud node measurement alone does not provide a realistic view of how end users are experiencing this particular site, and would lull an operations team into a false sense of security. That is not the kind of performance gap a retail website wants, particularly during the critical holiday shopping season.

Why is this so? Tests run from the cloud against a site hosted in that same cloud enjoy a form of dedicated network connection as well as preferential data routing. Think of it like a VIP's cleared traffic route through a crowded city. This streamlined data path is far removed from that of an average end user, whose content arrives after a long, circuitous route through ISPs, CDNs, wireless networks and various other pathways.
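
If you want to see this effect for yourself, a rough way to compare vantage points is to time the same request from different hosts: a cloud instance in the provider hosting the site, an office network, a home broadband connection, and so on. The following is only a minimal sketch, not Catchpoint's synthetic agent; it uses the Python standard library, and the URL and sample count are placeholders.

    import statistics
    import time
    import urllib.request

    # Placeholder target; substitute the site under test.
    URL = "https://www.example.com/"
    SAMPLES = 50  # number of timed requests from this vantage point

    def timed_request(url: str) -> float:
        """Fetch the URL once and return elapsed wall-clock time in milliseconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()  # read the full body so transfer time is included
        return (time.perf_counter() - start) * 1000.0

    if __name__ == "__main__":
        timings = [timed_request(URL) for _ in range(SAMPLES)]
        print(f"samples taken:        {len(timings)}")
        print(f"median response time: {statistics.median(timings):.1f} ms")

Run once from a cloud instance in the same provider as the site and again from a machine behind a typical ISP, and the two medians will usually tell very different stories, which is exactly the gap described above.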

Applications Not Suitable for Cloud-Only Monitoring

Another way of explaining this: cloud-only monitoring does not track performance along the entire application delivery chain, nor does it provide the diagnostics required to manage that chain. Any single point along that path — ISPs for example — can create problems impacting the end user experience.

Other important monitoring tasks that are not suited to cloud-only monitoring include:

■ SLA measurements for third parties along the delivery chain

■ Provider performance testing for services like CDNs, DNS and ad servers

■ Benchmarking against competitors in your industry

■ Diagnosing network or ISP connectivity issues

■ DNS availability and validation of service (a simple check of this kind is sketched below)
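
As a concrete illustration of the last item, here is a minimal sketch of a DNS availability check against several public resolvers. It assumes the third-party dnspython package (2.x) is installed; the hostname and resolver list are placeholders, and a real check would run from many vantage points rather than a single host.

    import dns.resolver  # third-party package: dnspython

    HOSTNAME = "www.example.com"  # placeholder domain to validate
    RESOLVERS = {
        "Google": "8.8.8.8",
        "Cloudflare": "1.1.1.1",
        "Quad9": "9.9.9.9",
    }

    def check_resolution(name: str, nameserver: str) -> list:
        """Resolve an A record via a specific nameserver and return the addresses."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        resolver.lifetime = 3.0  # overall timeout in seconds
        answers = resolver.resolve(name, "A")
        return [rdata.address for rdata in answers]

    if __name__ == "__main__":
        for provider, ip in RESOLVERS.items():
            try:
                addresses = check_resolution(HOSTNAME, ip)
                print(f"{provider} ({ip}): {', '.join(addresses)}")
            except Exception as exc:  # timeouts, NXDOMAIN, etc.
                print(f"{provider} ({ip}): FAILED - {exc}")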

Where Cloud-Only Monitoring Is Beneficial

Of course, it's not all bad news. Cloud-based monitoring can provide valuable insights for certain use cases, such as:

■ Determining availability and performance of an application or service from within the cloud infrastructure environment

■ Performing first-mile testing without deploying agents in physical locations

■ Testing some of the basic functionality and content of an application

■ Evaluating latency from cloud providers back to your own infrastructure (see the sketch below)
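
For the last item, one rough way to gauge that latency is to measure TCP connect times from an instance running inside the cloud provider back to an endpoint in your own infrastructure. The sketch below uses only the Python standard library; the target host, port and sample count are placeholders.

    import socket
    import statistics
    import time

    # Placeholder endpoint: a host and port reachable in your own infrastructure.
    TARGET_HOST = "origin.example.com"
    TARGET_PORT = 443
    SAMPLES = 20

    def tcp_connect_ms(host: str, port: int, timeout: float = 5.0) -> float:
        """Open a TCP connection and return how long it took, in milliseconds."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; close immediately
        return (time.perf_counter() - start) * 1000.0

    if __name__ == "__main__":
        timings = [tcp_connect_ms(TARGET_HOST, TARGET_PORT) for _ in range(SAMPLES)]
        print(f"min    : {min(timings):.1f} ms")
        print(f"median : {statistics.median(timings):.1f} ms")
        print(f"max    : {max(timings):.1f} ms")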

Conclusion and Best Practices

The key to avoiding the cloud-only DEM trap is to understand that the accuracy of your monitoring strategy depends on how your measurements are taken and from which locations. Cloud-based vantage points can be a valuable piece of the monitoring puzzle, but they should not be your sole monitoring infrastructure, because they cannot see the many network layers that make up the internet between your service and its users.

The answer will most likely be a blend of backbone, broadband, ISP, last-mile and wireless monitoring. Start where your customers are located and work your way back along the delivery chain. By canvassing all the elements that can impact their experience, you'll have the most accurate view of that experience, as well as the best opportunity to preempt performance problems before end users are affected.
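
To make the idea of a blended view concrete, here is an illustrative sketch of how results from different vantage-point types might be compared once collected. The samples, labels and threshold are hypothetical; the point is simply to surface the gap between what cloud nodes report and what nodes closer to end users see.

    from statistics import median

    # Hypothetical response-time samples (milliseconds) grouped by vantage-point type.
    measurements = {
        "cloud":     [31, 29, 35, 30, 33],
        "backbone":  [117, 121, 110, 125, 119],
        "last_mile": [240, 255, 231, 262, 248],
    }

    # Hypothetical threshold: flag vantage points that are this many times
    # slower than what the cloud nodes report.
    GAP_THRESHOLD = 2.0

    cloud_median = median(measurements["cloud"])
    for vantage, samples in measurements.items():
        m = median(samples)
        gap = m / cloud_median
        flag = "  <-- investigate" if gap > GAP_THRESHOLD else ""
        print(f"{vantage:10s} median={m:6.1f} ms  gap vs cloud={gap:4.1f}x{flag}")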

Mehdi Daoudi is CEO and Co-Founder of Catchpoint