The End of Net Neutrality Highlights Importance of APM

Mike Heumann

In January, the US Court of Appeals for the D.C. Circuit effectively killed “Net Neutrality” for most US households by ruling that cable companies, which are not “common carriers” within the scope of the FCC’s definition, do not have to provide equal treatment of all Internet traffic.

While the FCC may revisit whether cable companies should be common carriers, the question that millions of Netflix customers trying to stream Season 2 of House of Cards are likely asking is whether the end of Net Neutrality will result in spotty performance with lots of buffering, or whether they will get a broadcast-like, on-demand experience from their Netflix player.

Needless to say, this question was also on minds at Netflix, which recently struck a deal with Comcast whereby Netflix will place caches and other hardware within Comcast’s broadband network to improve the delivery of its streaming content to subscribers with Comcast cable and fiber broadband. Netflix may also be paying Comcast for this privileged access.

From a broader perspective, this approach represents one of the biggest public endorsements to date of the need for Application Performance Management (APM) across the Internet. Netflix is making a major investment in APM to ensure that its end users are able to use the application they are paying for — Netflix’s player and streamed content library — effectively.

While Netflix’s APM play is mostly about quality of service (QoS), the same APM pressure points apply to a variety of B2B applications as well — be it high-frequency trading (HFT) platforms, transaction processing systems such as large e-commerce platforms and credit card authorization networks, or even large, heavily queried databases. The critical requirement of APM is to ensure that critical data is served up and delivered in a consistent, ordered, and uncongested manner so that the target application can function correctly and reliably.

Historically, APM has been a focal point for large data centers running a few, typically massive, applications; examples include SAP, Oracle, and other large database platforms. The goal of APM systems in these “large platform” environments is to help identify issues impacting transactional performance, and ultimately to provide “alerting” that flags potential issues before performance becomes unacceptable.
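
To make that alerting model concrete, here is a minimal sketch in Python; the metric source, the 500 ms latency budget, and the 80% warning threshold are hypothetical illustrations rather than any particular APM product's behavior. It samples transaction latency, keeps a sliding window, and warns when the 95th percentile approaches the budget, before performance actually becomes unacceptable.

```python
import random
import statistics
import time

LATENCY_BUDGET_MS = 500                 # hypothetical SLA ceiling
WARN_AT_MS = 0.8 * LATENCY_BUDGET_MS    # alert before the budget is breached

def fetch_latency_ms() -> float:
    # Stand-in for a real metric query against an APM platform;
    # here we simply simulate one transaction-latency sample.
    return random.gauss(350, 80)

def watch(window: int = 20, cycles: int = 30, interval_s: float = 1.0) -> None:
    samples = []
    for _ in range(cycles):
        samples.append(fetch_latency_ms())
        samples = samples[-window:]     # keep only the most recent samples
        # 95th percentile of the window (19th of 20 cut points)
        p95 = statistics.quantiles(samples, n=20)[18] if len(samples) > 1 else samples[0]
        if p95 >= WARN_AT_MS:
            print(f"ALERT: p95 latency {p95:.0f} ms is nearing the {LATENCY_BUDGET_MS} ms budget")
        time.sleep(interval_s)

if __name__ == "__main__":
    watch()
```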

Application-Aware Network Performance Monitoring

Of particular interest has been the growth of “application-aware network performance monitoring” (AA-NPM) tools, which are blurring the line between APM and Network Performance Management (NPM). While it might seem obvious that networks can have a big impact on application performance, the network’s criticality is highlighted most clearly by newer technologies such as Virtual Desktop Infrastructure (VDI) and Voice over IP (VoIP), where the network delivers mission-critical applications in real time.

As the Netflix example above highlights, the next frontier in AA-NPM is measuring the performance of applications across the Internet. To be sure, this is more than simply a “consumer subscriber” issue affecting things like entertainment and communications. As enterprises embrace private, hybrid, and public cloud models as a way of delivering critical services to their internal and external customers, the need to measure performance across the Internet becomes ever more critical.

This need also applies when enterprises use third-party services such as Salesforce.com. As enterprises of all sizes move toward colocation, outsourcing, and applications as a service, customer demand for performance information across the Internet will only increase. This will put additional pressure on Internet and Managed Service Providers (ISPs and MSPs) and other platform operators to put countermeasures and agreements in place to ensure that their applications are not choked off by traffic shaping, peak-time congestion, and other broad-spectrum throughput issues that can disrupt the steady, consistent flow of packets across the public Internet and the last-mile WAN connection. A two-tier Internet is — it can be argued — an unfortunate but ultimately necessary by-product of ensuring that Internet and web-based apps and services are not starved of the data flows they need to function.

Cutting Through the Noise with Network Visibility

One of the obvious challenges of performance monitoring in enterprises with high-density 10Gb Ethernet (10GbE) networks and hundreds or thousands of virtualized servers is “breaking through” all of the noise in the environment to find out what is going on across these networks and how it is affecting performance. The aim of network visibility is to cut through that noise, and through multiple levels of virtualization and indirection, to identify the actual traffic of interest so that steps can be taken to ensure traffic passes from A to B at the rate necessary to support the target service.
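
As a rough illustration of isolating the “traffic of interest,” the short Python sketch below filters a handful of flow records down to a single application port and ranks the top talkers; the addresses, ports, and byte counts are invented, and a real network visibility tool would do this against live NetFlow/IPFIX or packet-broker feeds at far greater scale.

```python
from collections import Counter

# Invented flow records: (src, dst, dst_port, bytes). In practice these
# would come from NetFlow/IPFIX exports or a packet broker, not be hard-coded.
flows = [
    ("10.0.1.15", "10.0.9.40", 1433, 1_800_000),   # database traffic
    ("10.0.1.15", "10.0.9.41", 443,     90_000),   # HTTPS
    ("10.0.2.77", "10.0.9.40", 1433, 2_400_000),   # database traffic
    ("10.0.3.12", "10.0.9.50", 5060,    30_000),   # VoIP signalling
]

TARGET_PORT = 1433  # the application we care about (a SQL Server port, as an example)

by_talker = Counter()
for src, dst, port, nbytes in flows:
    if port == TARGET_PORT:             # discard everything but the traffic of interest
        by_talker[(src, dst)] += nbytes

for (src, dst), nbytes in by_talker.most_common():
    print(f"{src} -> {dst}: {nbytes / 1e6:.1f} MB on port {TARGET_PORT}")
```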

This technology is equally applicable to enterprises with multiple sites connected across the Internet. By deploying network visibility tools at multiple locations and “mining” the data from these tools centrally, metrics such as transit time between sites can be measured, profiled, and ultimately analyzed to identify performance bottlenecks or changes in network topology. While this data does not in and of itself improve the performance of applications across the Internet, it provides the insight necessary to understand what is impacting performance, allowing corrective actions to be planned and implemented effectively.
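
A minimal sketch of that kind of inter-site measurement, again in Python: each site times TCP handshakes to its peer sites and reports a median, which a central collector could then profile and baseline over time. The example.com hostnames are placeholders, and a production probe would use dedicated agents rather than ad hoc connections.

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, attempts: int = 5) -> list:
    """Time TCP handshakes to a peer site as a rough transit-time probe."""
    rtts = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                rtts.append((time.perf_counter() - start) * 1000)
        except OSError:
            pass  # failed attempts are simply left out of the profile
    return rtts

# Placeholder per-site endpoints; each site would run this probe and ship
# its results to a central store for profiling and bottleneck analysis.
SITES = {"nyc": "site-nyc.example.com", "lon": "site-lon.example.com"}

for site, host in SITES.items():
    rtts = tcp_rtt_ms(host)
    if rtts:
        print(f"{site}: median RTT {statistics.median(rtts):.1f} ms over {len(rtts)} probes")
    else:
        print(f"{site}: no successful probes")
```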

Network visibility is one of the most powerful tools to emerge in the quest to identify issues affecting application and network performance in the enterprise. By applying network visibility tools across multiple sites, IT organizations can now “peer across the Internet” and monitor performance in new ways. As we transition to a multi-tier Internet and more enterprises begin to measure how well their applications perform across the Internet, much as they already do on their own networks, look for them to use network visibility tools to see through the noise and identify the causes of performance issues, especially in an age without Net Neutrality.

Mike Heumann is Sr. Director, Marketing (Endace) for Emulex.

Related Links:

www.emulex.com/
