How a Tap or SPAN Choice Impacts APM

Keith Bromley

For application performance monitoring (APM), many in IT focus most of their attention on the tool that performs the analysis. Unfortunately for them, the battle is won or lost at the data access level. If you don’t have the right data, you can’t fix the problem correctly.

This viewpoint is backed up by an APMdigest post from August in which Jim Frey cited some critical survey research: "26% reported that their biggest challenge with incident response is that data exists, but they can’t access or analyze it easily." Key point – you need access to the right data at the right time to solve your problems.

This raises the question: how do I get the right data access?

The best source of data is a network tap. A tap makes a complete copy of ALL the data passing through it. It is a passive device, so it does not alter any of the data and has a negligible effect on transmission time.

Taps are great because they are "set and forget." You simply plug the device into the network (a one-time disruption) and you are done. No programming is required. Best of all, you can place taps anywhere in the network you need data from: ingress, egress, remote offices, etc.

The one drawback to using taps is that if you install lots of them (which you will want to do), the sheer number of data feeds can overwhelm the input ports on your APM tools. However, this issue is easily resolved by installing a network packet broker (NPB) to aggregate the data from the taps, filter it as necessary, and then send it on to the APM tool. This prevents the overcrowding of the APM tool's data ports.
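
To make that aggregate-filter-forward flow concrete, here is a minimal Python sketch of what an NPB does, written with Scapy. The interface names (tap0, tap1, tool0) and the web-traffic filter are illustrative assumptions, and a real NPB does this in dedicated hardware at line rate; this is only a toy software analogue.

```python
# Toy software analogue of a network packet broker (NPB): aggregate
# packets from several tap feeds, filter down to the traffic the APM
# tool cares about, and forward the result out a single tool port.
from scapy.all import sniff, sendp

TAP_FEEDS = ["tap0", "tap1"]   # assumed capture interfaces fed by taps
TOOL_PORT = "tool0"            # assumed interface facing the APM tool

def forward(pkt):
    # Relay each packet that passed the filter on to the APM tool.
    sendp(pkt, iface=TOOL_PORT, verbose=False)

# The BPF filter keeps only the application traffic of interest (here,
# web traffic). Sniffing a list of interfaces requires a recent Scapy.
sniff(iface=TAP_FEEDS, filter="tcp port 80 or tcp port 443",
      prn=forward, store=False)
```

Even in this toy form, the benefit is visible: the APM tool receives one consolidated, pre-filtered feed instead of a raw feed per tap.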

An alternative to a tap is to use a mirroring port (also referred to as a SPAN port) on your network switches. However, this is not recommended. One reason is that a SPAN port is part of an active device: the switch can materially change packet characteristics, such as inter-packet timing, as traffic flows through it. This is especially important when you are using data from these ports to diagnose problems.
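
If you suspect a SPAN feed is distorting timing, a rough check is to capture the same flow from both a tap and a SPAN session and compare inter-packet gaps. A minimal sketch, assuming two capture files (tap.pcap and span.pcap, hypothetical names) covering the same traffic:

```python
# Compare inter-packet timing between a tap capture and a SPAN capture
# of the same flow. The file names are illustrative assumptions.
from scapy.all import rdpcap

def inter_packet_gaps(path):
    # Timestamps (in seconds) as recorded in the capture file.
    times = [float(pkt.time) for pkt in rdpcap(path)]
    return [later - earlier for earlier, later in zip(times, times[1:])]

tap_gaps = inter_packet_gaps("tap.pcap")
span_gaps = inter_packet_gaps("span.pcap")

# Large discrepancies suggest the SPAN session is reshaping the
# traffic's timing before your APM tool ever sees it.
print(f"tap  max gap: {max(tap_gaps):.6f} s")
print(f"span max gap: {max(span_gaps):.6f} s")
```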

In addition, bad packets (i.e. malformed packets) are dropped by the SPAN port. This gives you a "digital view" of the situation: everything is fine, and then suddenly there is a problem. The packets that could have shown degradation building before the outright loss (and enabled a quicker diagnosis) are gone, along with any context about what was happening before the problem began.
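
You can look for this early-warning signal yourself on a tap-side capture. As a simple proxy for frame-level errors, the sketch below counts packets whose IP header checksum does not validate; these are exactly the packets a SPAN feed of the same traffic would never contain. The capture file name is hypothetical, and catching true FCS errors additionally requires a NIC configured to keep bad frames.

```python
# Count packets in a tap-side capture whose IP header checksum is
# wrong; a SPAN port would have silently dropped them.
from scapy.all import rdpcap, IP

packets = rdpcap("tap_capture.pcap")  # illustrative file name
bad = 0
for pkt in packets:
    if IP in pkt:
        received = pkt[IP].chksum
        # Deleting the field forces Scapy to recompute it on rebuild.
        del pkt[IP].chksum
        recomputed = IP(bytes(pkt[IP])).chksum
        if received != recomputed:
            bad += 1

print(f"{bad} of {len(packets)} packets failed IP checksum validation")
```

A count of such packets that climbs over time is precisely the degradation signal a SPAN feed hides from you.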

In the end, optimum data capture is achieved with a combination of taps and an NPB, and that translates directly into a faster mean time to repair (MTTR).
