5 Mistakes to Avoid When Deploying Packet Brokers
September 04, 2019

Alastair Hartrup
Network Critical

To achieve continuous visibility and control of today's complex networks, organizations rely on specialized monitoring and security tools that are connected to live links. In my last blog, I discussed how TAPs facilitate that failsafe connection. In this blog, I'd like to expand beyond the TAP and look at the role Packet Brokers play in an organization's visibility architecture, including some of the common mistakes engineers should avoid when using the more sophisticated Packet Broker features. Being aware of these issues can help network managers implement an efficient visibility architecture and avoid errors that could adversely affect network monitoring, performance and, ultimately, business operations.

Here are 5 common mistakes that are made when deploying Packet Brokers, and how to avoid them:

1. Don't Mistake a Packet Broker for a TAP

TAPs are relatively simple devices that are often confused with Packet Brokers. Both TAPs and Packet Brokers provide tool connectivity and have similar feature sets. However, TAPs provide failsafe network ports. These ports have copper relays or optical splitters that keep network traffic flowing even if power to the TAP is lost. Packet Brokers generally do not have failsafe network ports. Therefore, it's important to make the initial network connections using TAPs and send the traffic on to the Packet Broker for management.

There are some combination TAP/Packet Brokers on the market that provide both failsafe network connections and Packet Broker features. These hybrid units can save space and money, depending on network size, complexity, and the number of ports needed.

2. Buying New Monitoring Tools When New Links are Too Fast for Older Equipment

With ever-increasing bandwidth demands on networks, new links are often moving from copper connections (10Mbps to 100Mbps) to optical fiber (1Gbps), or from lower-speed fiber (1Gbps) to high-speed fiber (10Gbps to 100Gbps). Changing link media does not necessarily require replacing all legacy monitoring tools. Packet Brokers provide load balancing features that evenly distribute traffic from a high-speed network link among a number of lower-speed tools.

For example, an incoming network connection at 40Gbps can be connected to a Packet Broker and distributed through output/tool ports to five monitoring devices with a maximum processing capacity of 8Gbps each. This feature allows network managers to save CAPEX on monitoring tools while keeping pace with faster networking speeds.
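While the exact distribution mechanism is vendor-specific and implemented in hardware, most Packet Brokers balance load by hashing each packet's flow identifiers, so that every packet in a session lands on the same tool. Here is a minimal Python sketch of that idea; the tool-port names and hash choice are illustrative, not any vendor's implementation:

```python
import hashlib

# Hypothetical tool-port names feeding five 8Gbps monitoring devices.
TOOL_PORTS = ["tool-1", "tool-2", "tool-3", "tool-4", "tool-5"]

def pick_tool_port(src_ip, dst_ip, src_port, dst_port, protocol):
    """Choose a tool port by hashing the flow's 5-tuple.

    Sorting the endpoints makes the hash symmetric, so both directions
    of a conversation reach the same tool, which matters for stateful analysis.
    """
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return TOOL_PORTS[int.from_bytes(digest[:4], "big") % len(TOOL_PORTS)]

# Both directions of the same session map to the same tool port.
print(pick_tool_port("10.0.0.1", "198.51.100.7", 51512, 443, "tcp"))
print(pick_tool_port("198.51.100.7", "10.0.0.1", 443, 51512, "tcp"))
```

Because each session sticks to one tool, no single 8Gbps device has to reassemble flows that were split across its peers.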

3. Not Using a Packet Broker for In-Line Security Tools

Many security tools require in-line access to links, meaning that live traffic passes through the tool and back into the network. There are many TAPs that provide in-line access so the tool can have real-time control over live traffic. These TAPs protect live links through an active bypass function that keeps network traffic flowing even if the security tool is taken offline.

In complex networks, managers may be tempted to use multiple independent TAPs for in-line security tools and reserve the Packet Broker for passive monitoring tools. A Packet Broker, however, can pass real-time traffic delivered through in-line TAPs. This allows it to manage both in-line security tools and passive monitoring tools from one central device, simplifying deployment of all connected tools.
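As a rough sketch of that consolidation (the tool names here are hypothetical stand-ins, not a real broker API), the same forwarding logic can chain live traffic through an in-line security tool while mirroring copies to passive monitors:

```python
def inline_ips(packet: bytes):
    """Stand-in for an in-line security tool: returns the packet,
    or None to block it. Live traffic depends on this verdict."""
    return None if b"malicious" in packet else packet

def passive_analyzer(packet: bytes) -> None:
    """Stand-in for a passive monitoring tool: it receives a copy
    and can never disturb the live traffic."""
    print(f"analyzer observed {len(packet)} bytes")

def broker_forward(packet: bytes):
    """One central device handles both roles: passive tools get
    mirrored copies, and the in-line chain decides the packet's fate."""
    passive_analyzer(packet)   # mirror a copy to the passive tool
    return inline_ips(packet)  # chain live traffic through the security tool

if broker_forward(b"GET /index.html HTTP/1.1") is not None:
    print("packet returned to the network")
```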

4. Packet Slicing is Not Packet Manipulation

Packet Slicing is a Packet Broker feature that removes the payload from a packet before it arrives at the monitoring tool. This is useful when the monitoring tool only requires packet header information. Packet slicing can be an efficiency feature that allows the monitoring tool to work faster. It's also an important feature for privacy and legal compliance when monitoring equipment shouldn't have access to actual payload data. Accurate traffic monitoring, however, often requires visibility into the entire packet in order to correctly capture and report packet sizes and transit times through the network.

There are Packet Brokers that provide packet manipulation, which is similar to slicing but more complex, and more accurate for traffic monitoring and planning. Instead of simply removing the payload, the broker replaces payload data with random 1s and 0s, so the packet's original length is preserved. Packet manipulation provides privacy compliance, accurate traffic management, and a wider range of user-defined options for traffic analysis.
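The difference is easy to see in a short illustrative sketch (header length and framing are simplified here; real brokers do this in hardware): slicing shrinks the packet and loses size information, while manipulation hides the payload but preserves the original length:

```python
import os

HEADER_LEN = 54  # e.g., Ethernet (14) + IPv4 (20) + TCP (20); simplified

def slice_packet(packet: bytes) -> bytes:
    """Packet slicing: drop the payload entirely. The tool sees headers
    only, but the packet's true length is lost."""
    return packet[:HEADER_LEN]

def manipulate_packet(packet: bytes) -> bytes:
    """Packet manipulation: overwrite the payload with random bits.
    The payload is unreadable, but the original length is preserved."""
    return packet[:HEADER_LEN] + os.urandom(len(packet) - HEADER_LEN)

original = b"\x00" * HEADER_LEN + b"confidential payload"
print(len(slice_packet(original)))       # 54 -> size information lost
print(len(manipulate_packet(original)))  # 74 -> size preserved, payload hidden
```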

5. Not Planning for Scale

When designing a visibility architecture, it's critical that future needs be included in the plan. Plan A is to purchase more equipment and ports than initially required, so that capacity for future links and tools is built in from the start. Plan B is to purchase only what is needed today and worry about future needs and budget when the time comes.

The best plan, however, is Plan C: carefully evaluate all Packet Broker equipment options to build extensibility into the plan without breaking the budget. Some Packet Brokers offer scale-out options, where smaller initial units cover immediate needs and extension units accommodate future growth. This approach delivers immediate budgetary savings and provides for growth through simple add-ons rather than replacement of older equipment.

Monitoring tools were once used primarily for ad hoc diagnostics, but as networks advance and evolve, these solutions are now permanent additions that deliver vital information for today's modern digital businesses. Trends around BYOD, IoT, social media, and more are increasing network traffic and malicious activity, making it harder to ensure performance and secure users. Understanding the role of a TAP and Packet Broker, and what mistakes to avoid when deploying them, will allow you to create a flexible visibility architecture that meets the needs of IT while saving time and money.

Alastair Hartrup is CEO of Network Critical