
How to Boost Network Monitoring Tool Efficiency

Alastair Hartrup

Having the right tools and good visibility are critical to understanding what's going on in your network and applications. However, as networks become more complex and hybrid in nature, organizations can no longer afford to be reactive and rely only on portable diagnostic tools. They need real-time, comprehensive visibility.

To accomplish this, more and more organizations are deploying network monitoring platforms and solutions that use TAPs (Terminal Access Points) and Packet Brokers to establish permanent access to network links and gather critical performance data. These technologies maximize the utilization of connected tools for IT teams that need comprehensive monitoring and management, for example through a Network Performance Monitoring and Diagnostics (NPMD) platform.

Why are TAPs so important? Network TAPs are stand-alone devices that make a mirror copy of all of the traffic that flows between two network end-points (or nodes). This can then be output to various network tools, while the live traffic continues to pass through the network. Because they are independent of the network, they're fully configurable. This allows complex packet manipulation to be performed by network performance (or security) solutions.
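
To make that pass-through-plus-mirror behavior concrete, here is a minimal conceptual sketch in Python. The class and function names are hypothetical (this is not any vendor's API): the TAP copies each packet to every connected tool while the original continues along the live path.

```python
# A minimal conceptual sketch, not a vendor API: a TAP copies traffic flowing
# between two end-points to its monitor outputs while the live traffic keeps
# moving. All names here are hypothetical.

class NetworkTap:
    def __init__(self):
        self.monitor_outputs = []          # connected tools or Packet Broker feeds

    def attach_tool(self, tool):
        """Register a monitoring tool (any callable that accepts a packet)."""
        self.monitor_outputs.append(tool)

    def forward(self, packet, deliver):
        """Mirror a copy of the packet to every tool, then pass it through."""
        for tool in self.monitor_outputs:
            tool(dict(packet))             # each tool gets its own copy
        deliver(packet)                    # live traffic continues to the far end


# Usage: a sniffer-style tool that simply records what it sees.
captured = []
tap = NetworkTap()
tap.attach_tool(captured.append)
tap.forward({"src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 512},
            deliver=lambda pkt: None)      # stand-in for the downstream node
print(len(captured))                       # 1 -- copied to the tool, live path untouched
```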

Packet Brokers take the technology a step further and allow for the combination, integration, separation, manipulation and processing of inputs from many sources (including TAPs), and then deliver that data to a wide variety of appliance, platform and tool destinations. 

Both play a major role in providing the data necessary for real-time, comprehensive network visibility.

Monitoring tools such as sniffers, probes and NPMD solutions can be permanently and safely installed on all network links using TAPs. A TAP connects in-line on a network link, makes a mirror copy of all network traffic and forwards that copy directly to a monitoring tool (or Packet Broker). TAPs are also extremely safe – if power is lost, the network traffic continues to flow. For more complex networks with a variety of connected tools, Packet Brokers are used together with TAPs.

What are some of the key features that organizations should look for when deploying TAPs and Packet Brokers? Here are three key features to consider:

1. Flexible Port Mapping

Flexible port mapping allows the user to choose which ports packets travel through, with no preset requirements. Packets may come in from the network, go back out to the network, or be directed to a connected monitoring tool. Some TAPs require that certain ports be used for network traffic and others for monitoring tools; flexible port mapping allows any port to carry any type of traffic. This eliminates the need to buy a bigger system than necessary just because one type of port is maxed out while other ports sit open and unused. It also makes it simpler to add links and tools, since any open port can be used for a tool or for network access at any time. Not all TAPs and Packet Brokers offer this "scale out" flexibility.
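
As a rough illustration of why this matters, the sketch below treats the port assignment as plain data rather than a fixed hardware layout. The port numbers, labels and functions are hypothetical, not any product's configuration format: any free port can later become either a network port or a tool port, which is the "scale out" flexibility described above.

```python
# A hypothetical port map, not a vendor configuration syntax: with flexible
# port mapping, any physical port can be assigned as a network port or a tool
# port, so the assignment is just data.

port_map = {
    1: ("network", "link-A side 1"),   # ports 1 and 2 carry an in-line network link
    2: ("network", "link-A side 2"),
    3: ("tool", "npmd-probe"),         # a mirror copy of link A goes to a probe
    4: ("tool", "packet-broker"),      # and a second copy feeds a Packet Broker
}

def free_ports(port_map, total_ports=8):
    """Any unused port can later take either role -- a new link or a new tool."""
    return [p for p in range(1, total_ports + 1) if p not in port_map]

print(free_ports(port_map))            # [5, 6, 7, 8]
```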

2. Easy Aggregation

Aggregation combines traffic from multiple links and sends it to one specific tool. Links are often underutilized: a 10 Gbps link, for example, may be carrying only 4 Gbps of actual traffic.

Understanding the actual traffic on links and aggregating underutilized links to a single TAP or Packet Broker port can provide dramatic savings on monitoring tools. Doing the math, aggregating five links running at 2 Gbps to a single 10 Gbps output port connected to one monitoring tool can reduce the tool budget by a factor of five.
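
Here is that calculation written out as a small Python sketch; the utilization figures are just the illustrative numbers from the example above, not measurements.

```python
# The arithmetic behind the example: five links carrying 2 Gbps each fit within
# one 10 Gbps output port, so a single tool covers what would otherwise require
# five separate tool deployments. Figures are illustrative.

link_rates_gbps = [2, 2, 2, 2, 2]          # actual traffic, not line rate
output_port_gbps = 10

aggregate = sum(link_rates_gbps)
assert aggregate <= output_port_gbps, "aggregate traffic would oversubscribe the tool port"

tools_without_aggregation = len(link_rates_gbps)   # one tool per link
tools_with_aggregation = 1                          # one tool on the shared output port
print(f"Tools needed: {tools_without_aggregation} -> {tools_with_aggregation}")
```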

Imagine the savings opportunity in a large complex network. Using this strategy on hundreds of links, organizations can save hundreds of thousands of dollars.

3. Independent Filtering

Independent filtering eliminates traffic that is not relevant to the mission of the connected monitoring tool. It helps tools run faster and more efficiently, and allows them to monitor more links.

Hierarchical filtering is the traditional way filtering is designed. It can be very complicated and prone to network-affecting errors: if packets are filtered out at the top of the list, they cannot be re-introduced further down.

Independent filtering allows filter maps to be created quickly without consequence to other filters further down the list, making it faster and more accurate than hierarchical filtering. Look for TAPs or Packet Brokers that let you create multiple filters quickly on any stream with no need to distinguish between ingress and egress ports (and be sure you can define filter criteria using both ranges and individual values).
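
A small conceptual sketch (not a product API; the filter rules and packet fields are hypothetical) makes the difference concrete: in the hierarchical model each rule only sees what earlier rules left behind, while independent filter maps are each evaluated against the full stream, so adding one never changes what another delivers.

```python
# Conceptual comparison only -- rules and packet fields are hypothetical.

packets = [
    {"proto": "tcp", "port": 443},
    {"proto": "udp", "port": 53},
    {"proto": "tcp", "port": 80},
]

rules = [
    lambda p: p["proto"] == "tcp",     # filter 1: all TCP traffic
    lambda p: p["port"] < 1024,        # filter 2: well-known ports
]

def hierarchical(packets, rules):
    """Each rule only sees packets that earlier rules did not already claim."""
    remaining = list(packets)
    outputs = []
    for rule in rules:
        outputs.append([p for p in remaining if rule(p)])
        remaining = [p for p in remaining if not rule(p)]
    return outputs

def independent(packets, rules):
    """Every filter map is evaluated against the complete stream."""
    return [[p for p in packets if rule(p)] for rule in rules]

print(hierarchical(packets, rules))    # filter 2 never sees the TCP packets
print(independent(packets, rules))     # each filter is evaluated against all three packets
```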

When independent filtering is combined with aggregation, irrelevant packets are filtered out of each stream, allowing a higher aggregation ratio of links being sent to a monitoring tool. This means independent filtering not only saves OPEX by enabling faster, more accurate tool deployment, it also saves CAPEX by improving the link-to-tool aggregation ratio.
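
As a rough back-of-the-envelope sketch (the traffic figures are assumptions, not measurements): if filtering drops half of each link's traffic before aggregation, twice as many links can share the same tool port.

```python
# Illustrative numbers only: how filtering raises the link-to-tool aggregation ratio.

link_traffic_gbps = 2.0
relevant_fraction = 0.5        # assumed share of traffic the tool actually needs
tool_port_gbps = 10

links_without_filtering = int(tool_port_gbps // link_traffic_gbps)
links_with_filtering = int(tool_port_gbps // (link_traffic_gbps * relevant_fraction))

print(f"Links per tool: {links_without_filtering} -> {links_with_filtering}")   # 5 -> 10
```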

When looking to deploy or optimize your network monitoring solutions, consider the impact of strategically deploying network TAPs and Packet Brokers. Be sure you're using the features described above, as they can deliver significant tool cost savings and a more efficient network monitoring solution.
