The End of Net Neutrality Highlights Importance of APM

Mike Heumann

In January, a federal appeals court effectively killed “Net Neutrality” for most US households by ruling that cable companies, which are not “common carriers” under the FCC’s current classification, do not have to provide equal treatment of all Internet traffic.

While the FCC may revisit whether cable companies should be classified as common carriers, the question that millions of Netflix customers trying to stream Season 2 of House of Cards are likely asking is whether the end of Net Neutrality will mean spotty performance and constant buffering, or a broadcast-like on-demand experience from their Netflix player.

Needless to say, this question was also on Netflix’s mind. The company recently struck a deal with Comcast under which Netflix will place caches and other hardware within Comcast’s broadband network to improve the delivery of its streaming content to subscribers on Comcast cable and fiber broadband. Netflix may also be paying Comcast for this privileged access.

From a broader perspective, this approach represents one of the biggest public endorsements to date of the need for Application Performance Management (APM) across the Internet. Netflix is making a major investment in APM to ensure that its end users are able to use the application they are paying for — Netflix’s player and streamed content library — effectively.

While Netflix’s APM play is mostly about quality of service (QoS), the same pressure points apply to a variety of B2B applications as well, be they high frequency trading (HFT) platforms, transaction processing systems such as large e-commerce platforms and credit card authorization networks, or large distributed databases. The core requirement for APM is to ensure that data is served up and delivered in a consistent, ordered, and uncongested manner so that the target application functions correctly and reliably.

Historically, APM has been a focal point for large data centers running a few, typically massive applications, such as SAP, Oracle, and other large database platforms. The goal of APM systems in these “large platform” environments is to help identify issues impacting transactional performance, and ultimately to provide “alerting” that flags potential issues before performance becomes unacceptable.
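
To make the alerting concept concrete, here is a minimal sketch in Python of the kind of threshold check an APM system might run against recent transaction latency samples. The metric, thresholds, and sample values are illustrative assumptions, not any particular product’s defaults.

```python
from statistics import mean

# Hypothetical thresholds for one transaction type (milliseconds)
WARN_MS = 200   # warn before users notice degradation
CRIT_MS = 500   # performance is considered unacceptable

def check_latency(samples_ms):
    """Return an alert level based on the average of recent latency samples."""
    if not samples_ms:
        return "no-data"
    avg = mean(samples_ms)
    if avg >= CRIT_MS:
        return "critical"
    if avg >= WARN_MS:
        return "warning"
    return "ok"

print(check_latency([120, 180, 240, 310]))  # -> "warning"
```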

Application-Aware Network Performance Monitoring

Of particular interest has been the growth of “application-aware network performance monitoring” (AA-NPM) tools, which are blurring the line between APM and Network Performance Management (NPM). While it might seem obvious that networks can have a big impact on application performance, the network’s criticality is really highlighted by new technologies such as Virtual Desktop Infrastructure (VDI) and Voice over IP (VoIP), where the network is delivering mission-critical applications in real time.

As highlighted by the Netflix example above, the next frontier in AA-NPM is measuring performance of applications across the Internet. To be sure, this is more than simply a “consumer subscriber” issue affecting things like entertainment and communications. As enterprises embrace the private, hybrid, and public cloud models as a way of delivering critical services to their internal and external customers, the need to measure performance across the Internet becomes more critical.

This need also applies when enterprises use third-party services such as Salesforce.com. As enterprises of all sizes move toward colocation, outsourcing, and applications as a service, customer demand for performance information across the Internet will only increase. This will put additional pressure on Internet and Managed Service Providers (ISPs and MSPs) and other platform operators to put countermeasures and agreements in place to ensure that their applications are not choked off by traffic shaping, peak-time congestion, and other broad-spectrum throughput issues that can disrupt the steady flow of packets across the public Internet and the last-mile WAN connection. A two-tier Internet, it can be argued, is an unfortunate but ultimately necessary by-product of ensuring that Internet and web-based apps and services are not starved of the data flows they need to function.

Cutting Through the Noise with Network Visibility

One of the obvious challenges of performance monitoring in enterprises with high-density 10Gb Ethernet (10GbE) networks and hundreds or thousands of virtualized servers is “breaking through” all of the noise in the environment to find out what is going on across these networks, and how it is affecting performance. The aim of network visibility is to cut through the noise and multiple levels of virtualization and indirection to identify the actual traffic of interest so steps can be taken to make sure that traffic passes from A to B at the rate necessary to support the target service.
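
As a simple illustration of what cutting through the noise can look like in practice, the short Python sketch below filters a stream of flow records down to just the traffic belonging to the application of interest. The flow-record fields, addresses, and port are hypothetical assumptions rather than any specific vendor’s data model.

```python
# Hypothetical flow records, as a visibility tool might export them
flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "dport": 443, "bytes": 1_200_000},
    {"src": "10.0.3.7", "dst": "10.0.9.1", "dport": 53,  "bytes": 420},
    {"src": "10.0.1.6", "dst": "10.0.2.9", "dport": 443, "bytes": 880_000},
]

APP_PORT = 443            # assumed port of the target service
APP_SERVER = "10.0.2.9"   # assumed address of the application server

# Keep only the flows that belong to the application of interest
app_flows = [f for f in flows if f["dport"] == APP_PORT and f["dst"] == APP_SERVER]
total_bytes = sum(f["bytes"] for f in app_flows)
print(f"{len(app_flows)} flows of interest, {total_bytes} bytes toward {APP_SERVER}")
```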

This technology is equally applicable in enterprises with multiple sites connected across the Internet. By deploying network visibility tools at multiple locations and “mining” the data from these tools centrally, metrics such as transit time between sites can be measured, profiled, and ultimately analyzed to identify performance bottlenecks or changes in network topology. While this data does not in and of itself improve the performance of applications across the Internet, it provides the insight necessary to understand what is impacting performance, allowing corrective actions to be planned and implemented effectively.
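
As a rough sketch of how inter-site transit time might be derived from centrally mined capture data, the example below compares timestamps recorded at two sites for the same packets, matched on a hypothetical flow key. It assumes the two capture points have synchronized clocks; the identifiers and timestamp values are invented for illustration.

```python
# Timestamps (seconds) for the same packets seen at two capture points,
# keyed by a hypothetical flow/packet identifier. Clock sync is assumed.
site_a = {"flow-123": 1700000000.0012, "flow-124": 1700000000.0050}
site_b = {"flow-123": 1700000000.0348, "flow-124": 1700000000.0391}

# Transit time in milliseconds for every packet seen at both sites
transit_ms = [(site_b[k] - site_a[k]) * 1000.0 for k in site_a.keys() & site_b.keys()]

print(f"avg transit: {sum(transit_ms) / len(transit_ms):.1f} ms "
      f"over {len(transit_ms)} matched packets")
```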

Network visibility is one of the most powerful tools to emerge in the quest to identify issues affecting application and network performance in the enterprise. By applying network visibility tools across multiple sites, IT organizations now have the ability to “peer across the Internet” and monitor performance in new ways. As we transition to a multi-tier Internet and more enterprises start to measure how their applications perform across the Internet in the same way they do on their own networks, look for them to use network visibility tools to see through the noise and identify the causes of performance issues, especially in an age without Net Neutrality.

Mike Heumann is Sr. Director, Marketing (Endace) for Emulex.

Related Links:

www.emulex.com/

