5 Mistakes to Avoid When Deploying Packet Brokers

Alastair Hartrup

In order to achieve continuous visibility and control of today's complex networks, organizations rely on specialized monitoring and security tools that are connected to live links. In my last blog, I discussed how TAPs facilitate that failsafe connection. In this blog, I'd like to expand beyond the TAP and look at the role Packet Brokers play in an organization's visibility architecture. This includes exploring some of the common mistakes engineers should avoid when utilizing the more sophisticated Packet Broker features. Being aware of these issues can help network managers implement an efficient visibility architecture and avoid errors that could adversely affect network monitoring, performance and ultimately business operations.

Here are five common mistakes made when deploying Packet Brokers, and how to avoid them:

1. Don't Mistake a Packet Broker for a TAP

TAPs are relatively simple devices that are often confused with Packet Brokers. Both TAPs and Packet Brokers provide tool connectivity and have similar feature sets. However, TAPs provide failsafe network ports. These ports have copper relays or optical splitters that will keep network traffic flowing even if power is lost to the TAP. Packet Brokers generally do not have failsafe network ports. Therefore, it's important to make the initial network connections using TAPs and send the traffic through to the Packet Broker for management.

There are some combination TAP/Packet Brokers on the market that provide failsafe network connections and Packet Broker features. These combo (or Hybrid) units can save space and money depending on network size, complexity, and ports needed.

2. Buying New Monitoring Tools When New Links are Too Fast for Older Equipment

With ever-increasing bandwidth demands on networks, new links are often moving from copper connections (10Mbps to 100Mbps) to optical fiber (1Gbps), or from lower-speed fiber (1Gbps) to high-speed fiber (10Gbps – 100Gbps). Changing link media does not necessarily require replacing all legacy monitoring tools. Packet Brokers provide load balancing features that allow high-speed network links to evenly distribute the traffic among a number of lower-speed tools.

For example, an incoming network connection at 40Gbps can be connected to a Packet Broker and distributed through output/tool ports to five monitoring devices with a maximum processing capacity of 8Gbps each. This feature allows network managers to save CAPEX on monitoring tools while keeping pace with faster networking speeds.
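The load-balancing idea above can be sketched in a few lines. This is a simplified illustration, not a real Packet Broker implementation: it hashes each flow's 5-tuple to pick one of five hypothetical tool ports, which keeps every packet of a conversation on the same tool (something most analyzers require). The port count and field names are assumptions chosen to match the 40Gbps/5x8Gbps example.

```python
import hashlib

TOOL_PORTS = 5  # e.g. five 8Gbps tools behind one 40Gbps link (illustrative)

def tool_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick a tool port by hashing the flow's 5-tuple.

    Hashing the whole flow identity (rather than round-robin per
    packet) ensures all packets of one conversation reach the same
    monitoring tool, so sessions are never split across devices.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % TOOL_PORTS

# Every packet of the same flow lands on the same tool port:
a = tool_for_flow("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
b = tool_for_flow("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
assert a == b and 0 <= a < TOOL_PORTS
```

Real Packet Brokers do this in hardware at line rate, but the design choice is the same: flow-aware hashing trades perfectly even distribution for session integrity.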

3. Not Using a Packet Broker for In-Line Security Tools

Many security tools require in-line access to links, meaning that live traffic passes through the tool and back into the network. There are many TAPs that provide in-line access so the tool can have real-time control over live traffic. These TAPs protect live links through an active bypass function that keeps network traffic flowing even if the security tool is taken offline.

In complex networks, managers may be tempted to use multiple independent TAPs for in-line security tools, and Packet Brokers only to manage passive monitoring tools. Packet Brokers, however, can pass real-time traffic delivered through in-line TAPs. This allows the Packet Broker to manage both in-line security and passive monitoring tools through one central device, simplifying deployment of all connected tools.

4. Packet Slicing is Not Packet Manipulation

Packet Slicing is a Packet Broker feature that removes the payload from a packet before it arrives at the monitoring tool. This is done when the monitoring tool requires only packet header information. Packet slicing can be an efficiency feature that allows the monitoring tool to work faster. It's also an important feature for privacy and legal compliance when monitoring equipment shouldn't have access to actual payload data. Accurate traffic monitoring, however, often requires visibility into the entire packet in order to accurately capture and report on packet size and transit time through the network.

There are Packet Brokers that provide packet manipulation, which is similar to slicing, but more complex and more accurate for traffic monitoring and planning. This is done by replacing payload information with random 1's and 0's rather than simply removing the payload. Packet manipulation provides privacy compliance, accurate traffic management and a wider range of user-defined options for traffic analysis.
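The difference between the two features can be illustrated with a minimal sketch. This is not how any particular Packet Broker works internally; it simply contrasts the two behaviors on raw packet bytes, assuming a fixed 54-byte Ethernet/IPv4/TCP header for simplicity (real devices parse actual header lengths):

```python
import os

HEADER_LEN = 54  # 14B Ethernet + 20B IPv4 + 20B TCP, no options (assumed)

def slice_packet(pkt: bytes) -> bytes:
    """Packet slicing: truncate after the headers.

    The payload is gone, but so is the packet's original length,
    which skews size-based traffic statistics downstream.
    """
    return pkt[:HEADER_LEN]

def scrub_payload(pkt: bytes) -> bytes:
    """Packet manipulation: overwrite the payload with random bytes.

    The packet keeps its original length, so throughput and
    packet-size reporting stay accurate while the payload itself
    is unreadable for privacy compliance.
    """
    if len(pkt) <= HEADER_LEN:
        return pkt
    return pkt[:HEADER_LEN] + os.urandom(len(pkt) - HEADER_LEN)

pkt = bytes(HEADER_LEN) + b"sensitive payload data"
assert len(slice_packet(pkt)) == HEADER_LEN       # length information lost
assert len(scrub_payload(pkt)) == len(pkt)        # length preserved
```

The contrast is the whole point: slicing optimizes for tool throughput, while manipulation preserves the size and timing characteristics that capacity planning depends on.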

5. Not Planning for Scale

When designing a visibility architecture, it's critical that future needs be included in the plan. Plan A is to purchase more equipment and ports than initially required, so that capacity for future links and tools is built in from day one. Plan B is to purchase only what is needed today and worry about future needs, and budget, when the time comes.

However, the best plan, Plan C, is to carefully evaluate all Packet Broker equipment options to build extensibility into the plan without breaking the budget. Some Packet Brokers offer scale-out options that allow the purchase of smaller initial units for immediate needs, with extension units available for future growth. This approach delivers immediate budgetary savings and provides for growth by simple add-on rather than replacement of older equipment.

Monitoring tools were once used primarily for ad hoc diagnostics, but as networks advance and evolve, these solutions are now permanent additions that deliver vital information for today's modern digital businesses. Trends around BYOD, IoT, social media, and more, are increasing network traffic and malicious activity, making it harder to ensure performance and secure users. Understanding the role of a TAP and Packet Broker — and what mistakes to avoid when deploying them — will allow you to create a flexible visibility architecture that meets the needs of IT, while saving time and money.

