An hour-long outage this Tuesday ground the Internet to a halt after Fastly, a popular content delivery network (CDN) provider, experienced a glitch that downed Reddit, Spotify, HBO Max, Shopify, Stripe and the BBC, to name just a few of the properties affected.
The error brought down everything from streamers to fintech to news outlets, but fortunately lasted only about an hour. Fastly called the issue a "global CDN disruption," indicating that it wasn't confined to a single data center.
While this outage was frustrating for users who logged on Tuesday morning only to be greeted by a 503 error, it likely left many non-IT folks curious about CDNs and the larger internet infrastructure delivering their apps, sites and workflows (at least for an hour). While IT teams understand CDNs and their role in the business-critical apps employees consume, this type of outage highlights the need for end-to-end visibility.
For starters, CDNs are a critical component of the larger Internet. CDN companies operate servers around the globe that work together to improve the performance and availability of web services by caching some data as close to the end user as possible. With apps now critically linked to business tasks and productivity, the most popular apps use CDN technology to provide a consistently good experience for all users. For instance, the media content you consume (e.g., your New York Times front page) may be cached at a CDN server near you so that it doesn't have to be retrieved from a far-flung server every time you load a web page.
So while a page could take hundreds of milliseconds to load when it's being retrieved from a server on the other side of the world, a CDN can usually start sending the content of a page in less than 25 milliseconds when it's already been cached. This, in part, is how apps have continued to grow more complex without impacting the responsiveness for the end user.
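The caching behavior described above can be sketched in a few lines. This is a toy model, not how any real CDN is implemented: the class and method names are invented for illustration, and the "origin" is simulated rather than fetched over the network.

```python
import time

class EdgeCache:
    """Toy model of a CDN edge cache: serve content from the local store
    while it's fresh; otherwise fetch from the (slow, far-away) origin."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (content, fetched_at)

    def fetch_from_origin(self, url):
        # Stand-in for a request to a distant origin server.
        return f"content of {url}"

    def get(self, url):
        entry = self.store.get(url)
        now = time.time()
        if entry and now - entry[1] < self.ttl:
            return entry[0], "HIT"   # fast path: served from the edge
        content = self.fetch_from_origin(url)
        self.store[url] = (content, now)
        return content, "MISS"       # first request pays the origin round trip

cache = EdgeCache(ttl_seconds=60)
print(cache.get("/front-page"))  # MISS: retrieved from origin
print(cache.get("/front-page"))  # HIT: served from the nearby cache
```

The second request never touches the origin, which is the whole point: only the cache miss pays the cross-continent latency.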
Another way to understand CDNs is in relation to edge computing: in many enterprise contexts, CDNs are the WAN edge.
To help avoid congestion at key points in the network, teams can employ subnets (or VLANs) to help segment traffic at key locations, which can more intelligently (and predictably) route traffic to reduce the load on the larger network. In a similar fashion, enterprises can deploy CDNs that serve external requests directly without impacting the performance of the larger WAN.
The takeaway: despite being resilient, the Internet is very much built on optimizing performance for the task at hand, with servers and data centers across the world working together to deliver content to users. While many enterprises strive to go "internet-first" in an attempt to offload the amount of physical hardware their IT teams manage directly, these teams still need visibility into the environments that help route and deliver traffic across the enterprise footprint to ensure end-user experience stays consistent. When issues like this arise, understanding the scope of the outage from an enterprise perspective allows IT teams to identify the impact to their users and business.
Gaining this visibility requires a comprehensive monitoring tool that can take a granular look at the network while imposing minimal load on network capacity itself.
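A minimal sketch of such a low-overhead check, using only Python's standard library: one small synthetic request per interval, with the result classified into a health state. The function names and thresholds are invented for illustration; a real monitoring platform does far more, but the 503 case is exactly what an enterprise probe would have flagged during this outage.

```python
import time
import urllib.error
import urllib.request

def probe(url, timeout=5.0):
    """One lightweight synthetic request: a single small fetch per interval
    keeps the monitoring footprint on network capacity negligible."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code      # e.g. the 503s users saw during the outage
    except (urllib.error.URLError, OSError):
        status = None          # unreachable: connection or DNS failure
    latency_ms = (time.monotonic() - start) * 1000
    return status, latency_ms

def classify(status, latency_ms, slow_ms=500):
    if status is None or status >= 500:
        return "down"          # server-side failure, such as a CDN 503
    if latency_ms > slow_ms:
        return "degraded"      # reachable, but user experience is suffering
    return "ok"

print(classify(503, 42))    # down
print(classify(200, 800))   # degraded
print(classify(200, 30))    # ok
```

Running such probes from several locations is what lets a team distinguish "our app is broken" from "the CDN in front of it is down."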
The Latest
Gartner has highlighted the top trends that will impact technology providers in 2024: Generative AI (GenAI) is dominating the technical and product agenda of nearly every tech provider ...
In MEAN TIME TO INSIGHT Episode 4 - Part 1, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at Enterprise Management Associates (EMA) discusses artificial intelligence and network management ...
The integration and maintenance of AI-enabled Software as a Service (SaaS) applications have emerged as pivotal points in enterprise AI implementation strategies, offering both significant challenges and promising benefits. Despite the enthusiasm surrounding AI's potential impact, the reality of its implementation presents hurdles. Currently, over 90% of enterprises are grappling with limitations in integrating AI into their tech stack ...
In the intricate landscape of IT infrastructure, one critical component often relegated to the back burner is Active Directory (AD) forest recovery — an oversight with costly consequences ...
eBPF is a technology that allows users to run custom programs inside the Linux kernel, which changes the behavior of the kernel and makes execution up to 10x faster and more efficient for key parts of what makes our computing lives work. That includes observability, networking and security ...
Data mesh, an increasingly important decentralized approach to data architecture and organizational design, focuses on treating data as a product, emphasizing domain-oriented data ownership, self-service tools and federated governance. The 2024 State of the Data Lakehouse report from Dremio presents evidence of the growing adoption of data mesh architectures in enterprises ... The report highlights that the drive towards data mesh is increasingly becoming a business strategy to enhance agility and speed in problem-solving and innovation ...
Too much traffic can crash a website ... That stampede of traffic is even more horrifying when it's part of a malicious denial of service attack ... These attacks are becoming more common, more sophisticated and increasingly tied to ransomware-style demands. So it's no wonder that the threat of DDoS remains one of the many things that keep IT and marketing leaders up at night ...
Today, applications serve as the backbone of businesses, and therefore, ensuring optimal performance has never been more critical. This is where application performance monitoring (APM) emerges as an indispensable tool, empowering organizations to safeguard their applications proactively, match user expectations, and drive growth. But APM is not without its challenges. Choosing to implement APM is a path that's not easily realized, even if it offers great benefits. This blog deals with the potential hurdles that may manifest when you actualize your APM strategy in your IT application environment ...
This year's Super Bowl drew nearly 124 million viewers and made history as the most-watched live broadcast event since the 1969 moon landing. To support this spike in viewership, streaming companies like YouTube TV, Hulu and Paramount+ began preparing their IT infrastructure months in advance to ensure an exceptional viewer experience without outages or major interruptions. New Relic conducted a survey to understand the importance of a seamless viewing experience and the impact of outages during major streaming events such as the Super Bowl ...