The End of Net Neutrality Highlights Importance of APM
April 30, 2014

Mike Heumann


In January, the US Court of Appeals for the DC Circuit effectively killed “Net Neutrality” for most US households by ruling that broadband providers such as cable companies, which are not “common carriers” under the FCC’s own classification, do not have to provide equal treatment of all Internet traffic.

While the FCC may revisit whether cable companies should be common carriers, the question that millions of Netflix customers trying to stream Season 2 of House of Cards are likely asking is whether the end of Net Neutrality will mean spotty performance with lots of buffering, or whether they will still get a broadcast-like, on-demand experience from their Netflix player.

Needless to say, this question was also on the minds of executives at Netflix, which recently agreed to a deal with Comcast under which Netflix will place caches and other hardware inside Comcast’s broadband network to improve the delivery of its streaming content to subscribers with Comcast cable and fiber broadband. Netflix may also be paying Comcast for this privileged access.

From a broader perspective, this approach represents one of the biggest public endorsements to date of the need for Application Performance Management (APM) across the Internet. Netflix is making a major investment in APM to ensure that its end users are able to use the application they are paying for — Netflix’s player and streamed content library — effectively.

While Netflix’s APM play is mostly about quality of service (QoS), the same APM pressure points apply to a variety of B2B applications as well, be they high-frequency trading (HFT) platforms, transaction processing systems such as large e-commerce platforms and credit card authorization networks, or large shared databases serving many concurrent clients. In every case, the core requirement for APM is to ensure that critical data is delivered in a consistent, ordered and uncongested manner so that the target application functions correctly and reliably.
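To make that requirement concrete, here is a minimal sketch, in Python, of the kind of delivery-consistency check an APM tool performs: given sequence numbers and arrival times for chunks of a stream, it flags out-of-order delivery and computes inter-arrival jitter. The sample data is hypothetical, not output from any real product.

from statistics import mean, pstdev

# (sequence_number, arrival_time_in_seconds) for chunks of a media stream
arrivals = [(1, 0.000), (2, 0.102), (3, 0.198), (5, 0.305), (4, 0.410), (6, 0.650)]

# Ordering check: flag chunks that arrive out of sequence.
out_of_order = [seq for (seq, _), (prev_seq, _) in zip(arrivals[1:], arrivals)
                if seq < prev_seq]

# Consistency check: inter-arrival jitter (spread of the gaps between chunks).
gaps = [t2 - t1 for (_, t1), (_, t2) in zip(arrivals, arrivals[1:])]
jitter = pstdev(gaps)

print(f"out-of-order chunks: {out_of_order}")
print(f"mean gap: {mean(gaps)*1000:.0f} ms, jitter: {jitter*1000:.0f} ms")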

Historically, APM has been a focal point for large data centers running a few, typically massive, applications such as SAP, Oracle, and other large database platforms. The goal of APM systems in these “large platform” environments is to help identify issues impacting transactional performance, and ultimately to provide “alerting” that flags potential issues before performance becomes unacceptable.
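As a rough illustration of that alerting model, the sketch below watches a rolling window of transaction latencies and warns while the 95th percentile is degrading, before it crosses an unacceptable ceiling. The thresholds, window size, and sample latencies are illustrative assumptions, not vendor defaults.

from collections import deque

WARN_P95_MS = 250.0    # start alerting here...
FAIL_P95_MS = 500.0    # ...well before performance becomes unacceptable here
WINDOW = 100           # number of recent transactions to consider

window = deque(maxlen=WINDOW)

def percentile(samples, pct):
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

def record(latency_ms):
    window.append(latency_ms)
    p95 = percentile(window, 95)
    if p95 >= FAIL_P95_MS:
        print(f"CRITICAL: p95 latency {p95:.0f} ms")
    elif p95 >= WARN_P95_MS:
        print(f"WARNING: p95 latency {p95:.0f} ms is trending toward the limit")

for latency in [120, 130, 180, 260, 310, 420, 510]:
    record(latency)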

Application-Aware Network Performance Monitoring

Of particular interest has been the growth of “application-aware network performance monitoring” (AA-NPM) tools, which are blurring the line between APM and Network Performance Management (NPM). While it might seem obvious that networks can have a big impact on application performance, the network’s criticality is thrown into sharpest relief by technologies such as Virtual Desktop Infrastructure (VDI) and Voice over IP (VoIP), where the network delivers mission-critical applications in real time.
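One way to see what “application-aware” adds is to split a single probe’s response time into a network component and an application component. The sketch below times the TCP handshake (network) separately from the wait for the first response byte (server processing). The target host is a placeholder, and real AA-NPM tools do this passively and at scale rather than with an ad-hoc probe.

import socket, time

HOST, PORT = "example.com", 80

t0 = time.perf_counter()
sock = socket.create_connection((HOST, PORT), timeout=5)
t_connect = time.perf_counter() - t0           # network: TCP handshake time

request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
t1 = time.perf_counter()
sock.sendall(request.encode())
sock.recv(1)                                   # block until the first byte arrives
t_first_byte = time.perf_counter() - t1        # application: server work + one RTT
sock.close()

print(f"network (connect): {t_connect*1000:.1f} ms")
print(f"application (TTFB): {t_first_byte*1000:.1f} ms")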

As highlighted by the Netflix example above, the next frontier in AA-NPM is measuring the performance of applications across the Internet. To be sure, this is more than simply a “consumer subscriber” issue affecting things like entertainment and communications. As enterprises embrace private, hybrid, and public cloud models as a way of delivering critical services to their internal and external customers, the need to measure performance across the Internet becomes ever more pressing.

This need also applies when enterprises use third-party services such as Salesforce.com. As enterprises of all sizes move towards colocation, outsourcing, and applications as a service, customer demand for performance information across the Internet will only increase. This will put additional pressure on Internet and Managed Service Providers (ISPs and MSPs) and other platform operators to put countermeasures and agreements in place to ensure that their applications are not choked off by traffic shaping, peak-time congestion, and other broad-spectrum throughput issues that can disrupt the steady, consistent flow of packets across the public Internet and the last-mile WAN connection. A two-tier Internet, it can be argued, is an unfortunate but ultimately necessary by-product of ensuring that Internet and web-based apps and services are not starved of the data flows they need to function.

Cutting Through the Noise with Network Visibility

One of the obvious challenges of performance monitoring in enterprises with high-density 10Gb Ethernet (10GbE) networks and hundreds or thousands of virtualized servers is “breaking through” all of the noise in the environment to find out what is going on across these networks and how it affects performance. The aim of network visibility is to cut through that noise, and through multiple layers of virtualization and indirection, to identify the actual traffic of interest so that steps can be taken to ensure the traffic passes from A to B at the rate necessary to support the target service.
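A minimal sketch of that filtering step: reduce a flood of NetFlow-style flow records to just the application of interest and summarize its throughput per client. The flow records and the target port here are hypothetical sample data, not real capture output.

from collections import defaultdict

# (src, dst, dst_port, bytes) flow records from a busy 10GbE segment
flows = [
    ("10.0.1.5", "10.0.2.9",  443, 1_200_000),
    ("10.0.3.7", "10.0.2.9",  443,   800_000),
    ("10.0.1.5", "10.0.4.2",   53,     4_000),
    ("10.0.5.1", "10.0.2.9",  443, 2_500_000),
    ("10.0.6.3", "10.0.7.8", 3306,   900_000),
]

TARGET_PORT = 443  # the service under investigation (an assumption)

per_client = defaultdict(int)
for src, dst, port, nbytes in flows:
    if port == TARGET_PORT:        # the "traffic of interest" filter
        per_client[src] += nbytes

for client, total in sorted(per_client.items(), key=lambda kv: -kv[1]):
    print(f"{client}: {total/1e6:.1f} MB to port {TARGET_PORT}")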

This technology is equally applicable in enterprises with multiple sites connected across the Internet. By deploying network visibility tools at multiple locations and “mining” the data from these tools centrally, metrics such as transit time between sites can be measured, profiled, and ultimately analyzed to identify performance bottlenecks or changes in network topology. While this data does not in and of itself improve the performance of applications across the Internet, it provides the insight necessary to understand what is impacting performance, allowing corrective actions to be planned and implemented effectively.
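As a toy version of that multi-site approach, the sketch below probes a handful of site endpoints, uses median TCP connect time as a rough transit-time proxy, and gathers the results into one central report. The site hostnames are placeholders; a production deployment would rely on the visibility tools’ own probes and data export rather than ad-hoc timing.

import socket, time
from statistics import median

SITES = {"nyc": ("nyc.example.com", 443),
         "lon": ("lon.example.com", 443),
         "sfo": ("sfo.example.com", 443)}

def transit_ms(endpoint, samples=5):
    """Median TCP connect time to an endpoint, as a rough RTT proxy."""
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        try:
            socket.create_connection(endpoint, timeout=3).close()
            times.append((time.perf_counter() - t0) * 1000)
        except OSError:
            pass  # unreachable samples are simply dropped
    return median(times) if times else None

# Central "mining" step: collect and compare all site measurements.
report = {name: transit_ms(ep) for name, ep in SITES.items()}
for name, ms in sorted(report.items(), key=lambda kv: kv[1] or float("inf")):
    print(f"{name}: {'unreachable' if ms is None else f'{ms:.1f} ms'}")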

Network visibility is one of the most powerful tools to emerge in the quest to identify issues affecting application and network performance in the enterprise. By applying network visibility tools across multiple sites, IT organizations now have the ability to “peer across the Internet” and monitor performance in new ways. As we transition to a multi-tier Internet, and as more enterprises start to measure how well their applications perform across the Internet much as they already do on their own networks, look for them to use network visibility tools to see through the noise and identify the causes of performance issues, especially in an age without Net Neutrality.

Mike Heumann is Sr. Director, Marketing (Endace) for Emulex.

Related Links:

www.emulex.com/

