The End of Net Neutrality Highlights Importance of APM
April 30, 2014

Mike Heumann


In January, the US Court of Appeals for the DC Circuit effectively killed "Net Neutrality" for most US households by ruling that because broadband providers are not classified as "common carriers" under the FCC's rules, the FCC cannot require them to treat all Internet traffic equally.

While the FCC may revisit whether broadband providers should be classified as common carriers, the question that millions of Netflix customers trying to stream Season 2 of House of Cards are likely asking is simpler: will the end of Net Neutrality mean spotty performance and constant buffering, or will they still get a broadcast-like on-demand experience from their Netflix player?

Needless to say, this was also on the minds at Netflix, which recently struck a deal with Comcast under which Netflix will place caches and other hardware inside Comcast's broadband network to improve delivery of its streaming content to Comcast cable and fiber broadband subscribers. Netflix may also be paying Comcast for this privileged access.

From a broader perspective, this approach represents one of the biggest public endorsements to date of the need for Application Performance Management (APM) across the Internet. Netflix is making a major investment in APM to ensure that its end users are able to use the application they are paying for — Netflix’s player and streamed content library — effectively.

While Netflix's APM play is mostly about quality of service (QoS), the same pressure points apply to a variety of B2B applications as well, be they high-frequency trading (HFT) platforms, transaction processing systems such as large e-commerce platforms and credit card authorization networks, or large analytical databases. The core requirement of APM is to ensure that critical data is delivered in a consistent, ordered and uncongested manner so that the target application functions correctly, reliably and consistently.

Historically, APM has been a focus point for large data centers with a few, typically massive applications. Examples of this include SAP, Oracle, and other large database platforms. The goal of APM systems in these “large platform” environments is to help identify issues impacting transactional performance, and ultimately to provide “alerting” that identifies potential issues before performance becomes unacceptable.
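To make the "alerting" idea concrete, here is a minimal sketch of threshold-based alerting on transaction response times. The sample window, the p95 threshold and the alert_on_latency hook are illustrative assumptions, not features of any particular APM product.

```python
from collections import deque
from statistics import quantiles

# Hypothetical sliding window of recent transaction response times (ms).
WINDOW_SIZE = 1000          # assumed sample window
P95_THRESHOLD_MS = 250.0    # assumed "unacceptable" latency threshold

recent_latencies = deque(maxlen=WINDOW_SIZE)

def record_transaction(latency_ms: float) -> None:
    """Record one transaction's response time and alert if the p95 drifts too high."""
    recent_latencies.append(latency_ms)
    if len(recent_latencies) >= 100:                  # wait for a minimally useful sample
        p95 = quantiles(recent_latencies, n=20)[18]   # 95th percentile of the window
        if p95 > P95_THRESHOLD_MS:
            alert_on_latency(p95)

def alert_on_latency(p95_ms: float) -> None:
    """Placeholder notification hook; a real APM tool would page someone or open a ticket."""
    print(f"ALERT: p95 response time {p95_ms:.1f} ms exceeds {P95_THRESHOLD_MS} ms")
```

The point of the sketch is simply that alerting keys off a trend in the measured data rather than a single slow transaction, which is what lets it flag problems before performance becomes unacceptable.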

Application-Aware Network Performance Monitoring

Of particular interest has been the growth of "application-aware network performance monitoring" (AA-NPM) tools, which are blurring the line between APM and Network Performance Management (NPM). While it might seem obvious that networks can have a big impact on application performance, the network's criticality is really highlighted by newer technologies such as Virtual Desktop Infrastructure (VDI) and Voice over IP (VoIP), where the network is delivering mission-critical applications in real time.

As highlighted by the Netflix example above, the next frontier in AA-NPM is measuring performance of applications across the Internet. To be sure, this is more than simply a “consumer subscriber” issue affecting things like entertainment and communications. As enterprises embrace the private, hybrid, and public cloud models as a way of delivering critical services to their internal and external customers, the need to measure performance across the Internet becomes more critical.

This need also applies when enterprises use third-party services such as Salesforce.com. As enterprises of all sizes move toward colocation, outsourcing, and applications as a service, customer demand for performance information across the Internet will only increase. This will put additional pressure on Internet and Managed Service Providers (ISPs and MSPs) and other platform operators to put countermeasures and agreements in place to ensure that their applications are not choked off by traffic shaping, peak-time congestion and other broad-spectrum throughput issues that can affect the steady, consistent flow of packets across the public Internet and the last-mile WAN connection. A two-tier Internet, it can be argued, is an unfortunate but ultimately necessary by-product of ensuring that Internet and web-based apps and services are not starved of the data flows they need to function.

Cutting Through the Noise with Network Visibility

One of the obvious challenges of performance monitoring in enterprises with high-density 10Gb Ethernet (10GbE) networks and hundreds or thousands of virtualized servers is “breaking through” all of the noise in the environment to find out what is going on across these networks, and how it is affecting performance. The aim of network visibility is to cut through the noise and multiple levels of virtualization and indirection to identify the actual traffic of interest so steps can be taken to make sure that traffic passes from A to B at the rate necessary to support the target service.
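As a rough illustration of "cutting through the noise", the sketch below filters a stream of flow records down to the traffic of interest (a single application's server pool and port) and reports its share of total volume. The flow-record schema and the service definition are assumptions for illustration, not any specific visibility product's API.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class FlowRecord:
    """One aggregated network flow, as a visibility tool might export it (assumed schema)."""
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes: int

# Hypothetical definition of the "traffic of interest": one service's servers and port.
SERVICE_SERVERS = {"10.1.20.11", "10.1.20.12"}
SERVICE_PORT = 443

def traffic_of_interest(flows: Iterable[FlowRecord]) -> list[FlowRecord]:
    """Drop everything except flows terminating at the target service."""
    return [f for f in flows if f.dst_ip in SERVICE_SERVERS and f.dst_port == SERVICE_PORT]

def summarize(flows: Iterable[FlowRecord]) -> None:
    """Report how much of the observed volume belongs to the service we care about."""
    flows = list(flows)
    interesting = traffic_of_interest(flows)
    total = sum(f.bytes for f in flows)
    kept = sum(f.bytes for f in interesting)
    share = (kept / total * 100) if total else 0.0
    print(f"{len(interesting)}/{len(flows)} flows, {share:.1f}% of bytes are traffic of interest")
```

In a real 10GbE environment this filtering happens in hardware or in the capture layer, but the principle is the same: isolate the flows that matter before trying to reason about their performance.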

This technology is equally applicable in enterprises with multiple sites connected across the Internet. By deploying network visibility tools at multiple locations and "mining" the data from these tools centrally, metrics such as transit time between sites can be measured, profiled, and ultimately analyzed to identify performance bottlenecks or changes in network topology. While this data does not in and of itself improve the performance of applications across the Internet, it provides the insight necessary to understand what is impacting performance, allowing corrective actions to be planned and implemented effectively.
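Here is a minimal sketch of that "mining the data centrally" idea: given timestamps for the same transaction observed by probes at two sites, compute per-transaction transit times and flag outliers. The record format and the outlier rule are assumptions, not how any particular multi-site monitoring product works.

```python
from statistics import median

# Hypothetical probe exports: transaction ID -> observation time (seconds since epoch).
# In practice these would come from capture appliances at each site with synchronized clocks.
site_a_seen = {"txn-001": 1000.000, "txn-002": 1000.120, "txn-003": 1000.250}
site_b_seen = {"txn-001": 1000.031, "txn-002": 1000.155, "txn-003": 1000.610}

def transit_times(a: dict, b: dict) -> dict:
    """Transit time for every transaction observed at both sites (seconds)."""
    return {txn: b[txn] - a[txn] for txn in a.keys() & b.keys()}

def flag_outliers(times: dict, factor: float = 3.0) -> list:
    """Flag transactions whose transit time exceeds `factor` times the median (assumed rule)."""
    baseline = median(times.values())
    return [txn for txn, t in times.items() if t > factor * baseline]

times = transit_times(site_a_seen, site_b_seen)
print("median transit:", f"{median(times.values()) * 1000:.1f} ms")
print("suspect transactions:", flag_outliers(times))
```

Profiling transit times this way over days or weeks is what turns raw visibility data into evidence of bottlenecks or topology changes between sites.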

Network visibility is one of the most powerful tools to emerge in the quest to identify issues affecting application and network performance in the enterprise. By applying network visibility tools across multiple sites, IT organizations now have the ability to "peer across the Internet" and monitor performance in new ways. As we transition to a multi-tier Internet and more enterprises measure how well their applications perform across the Internet, much as they already do on their own networks, look for them to use network visibility tools to see through the noise and identify the causes of performance issues, especially in an age without Net Neutrality.

Mike Heumann is Sr. Director, Marketing (Endace) for Emulex.

Related Links:

www.emulex.com/

