To achieve continuous visibility and control of today's complex networks, organizations rely on specialized monitoring and security tools connected to live links. In my last blog, I discussed how TAPs facilitate that failsafe connection. In this blog, I'd like to expand beyond the TAP and look at the role Packet Brokers play in an organization's visibility architecture, including some of the common mistakes engineers should avoid when using the more sophisticated Packet Broker features. Being aware of these issues can help network managers implement an efficient visibility architecture and avoid errors that could adversely affect network monitoring, performance and, ultimately, business operations.
Here are 5 common mistakes that are made when deploying Packet Brokers, and how to avoid them:
1. Don't Mistake a Packet Broker for a TAP
TAPs are relatively simple devices that are often confused with Packet Brokers. Both TAPs and Packet Brokers provide tool connectivity and have similar feature sets. However, TAPs provide failsafe network ports. These ports have copper relays or optical splitters that will keep network traffic flowing even if power is lost to the TAP. Packet Brokers generally do not have failsafe network ports. Therefore, it's important to make the initial network connections using TAPs and send the traffic through to the Packet Broker for management.
There are some combination TAP/Packet Brokers on the market that provide failsafe network connections and Packet Broker features. These combo (or Hybrid) units can save space and money depending on network size, complexity, and ports needed.
2. Buying New Monitoring Tools When New Links are Too Fast for Older Equipment
With ever-increasing bandwidth demands on networks, new links are often moving from copper connections (10Mbps to 100Mbps) to optical fiber (1Gbps), or from lower-speed fiber (1Gbps) to high-speed fiber (10Gbps – 100Gbps). Changing link media does not necessarily require replacing all legacy monitoring tools. Packet Brokers provide load balancing features that allow high-speed network links to evenly distribute the traffic among a number of lower-speed tools.
For example, an incoming network connection at 40Gbps can be connected to a Packet Broker and distributed through output/tool ports to five monitoring devices with a maximum processing capacity of 8Gbps each. This feature allows network managers to save CAPEX on monitoring tools while keeping pace with faster networking speeds.
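To make the distribution concrete, here is a minimal sketch of the flow-aware load balancing a Packet Broker performs, hashing each packet's 5-tuple so that all packets of a flow consistently land on the same lower-speed tool. The port names and tool count are hypothetical, chosen to match the 40Gbps-to-five-tools example above; real Packet Brokers do this in hardware.

```python
import hashlib

# Hypothetical tool ports, each feeding a monitoring device of up to 8Gbps.
TOOL_PORTS = ["tool-1", "tool-2", "tool-3", "tool-4", "tool-5"]

def select_tool_port(src_ip: str, dst_ip: str,
                     src_port: int, dst_port: int, proto: int) -> str:
    """Pick a tool port by hashing the flow's 5-tuple.

    Hashing (rather than round-robin) keeps every packet of a given
    flow on the same tool, so each tool sees complete sessions.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(TOOL_PORTS)
    return TOOL_PORTS[index]
```

Because the hash is deterministic, repeated lookups for the same flow always return the same port, while different flows spread roughly evenly across the five tools.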
3. Not Using a Packet Broker for In-Line Security Tools
Many security tools require in-line access to links, meaning that live traffic passes through the tool and back into the network. There are many TAPs that provide in-line access so the tool can have real-time control over live traffic. These TAPs protect live links through an active bypass function that keeps network traffic flowing even if the security tool is taken offline.
In complex networks, managers may be tempted to use multiple independent TAPs for in-line security tools, and Packet Brokers only to manage passive monitoring tools. Packet Brokers, however, can pass real-time traffic delivered through in-line TAPs. This allows the Packet Broker to manage both in-line security and passive monitoring tools through one central device, simplifying deployment of all connected tools.
4. Packet Slicing is Not Packet Manipulation
Packet Slicing is a Packet Broker feature that removes the payload from a packet before it arrives at the monitoring tool. This is done when only packet header information is required by the monitoring tool. Packet slicing can be an efficiency feature that allows the monitoring tool to work faster. It's also an important feature for privacy and legal compliance when monitoring equipment shouldn't have access to actual payload data. Accurate traffic monitoring, however, often requires visibility into the entire packet in order to accurately capture and report on packet size and transit time through the network.
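In software terms, packet slicing amounts to truncating each packet at a fixed "snap length" that covers the headers, much like the snaplen setting in packet-capture tools. The sketch below is illustrative only; the 64-byte default is an assumption that comfortably covers typical Ethernet, IP and TCP headers, and real Packet Brokers perform this in hardware at line rate.

```python
def slice_packet(packet: bytes, snap_len: int = 64) -> bytes:
    """Truncate a raw packet to snap_len bytes, discarding the payload.

    snap_len is chosen to retain the L2-L4 headers; anything beyond
    it (the payload) never reaches the monitoring tool.
    """
    return packet[:snap_len]
```

Note the side effect described above: once sliced, the original packet size is lost to the tool, which is why slicing can distort traffic-volume statistics.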
There are Packet Brokers that provide packet manipulation, which is similar to slicing but more complex and more accurate for traffic monitoring and planning. Instead of removing the payload, the payload is overwritten with random 1's and 0's, so the packet retains its original size. Packet manipulation provides privacy compliance, accurate traffic management and a wider range of user-defined options for traffic analysis.
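The difference from slicing can be sketched in a few lines: the headers are kept, but the payload bytes are replaced with random data of the same length, so size-based statistics remain accurate while the original content is unrecoverable. The 54-byte header length below is an assumption (Ethernet + IPv4 + TCP with no options); a real device parses the actual header boundaries.

```python
import os

def randomize_payload(packet: bytes, header_len: int = 54) -> bytes:
    """Overwrite the payload with random bytes, preserving packet length.

    Unlike slicing, the returned packet is the same size as the
    original, so traffic-volume and timing analysis stay accurate
    while the payload content is destroyed for privacy compliance.
    """
    payload_len = max(len(packet) - header_len, 0)
    return packet[:header_len] + os.urandom(payload_len)
```

A tool downstream still sees correct packet sizes and headers, which is what makes this approach better suited to capacity planning than plain slicing.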
5. Not Planning for Scale
When designing a visibility architecture, it's critical that future needs be included in the plan. Plan A is to purchase more equipment and ports than initially required, so that capacity for future links and tools is built in from the start. Plan B is to purchase only what is needed today and worry about future needs and budget when the time comes.
However the best plan, Plan C, is to carefully evaluate all Packet Broker equipment options to build extensibility into the plan without breaking the budget. Some Packet Brokers offer scale-out options that allow the purchase of smaller initial units for immediate needs and provide extension units for future growth. This plan allows immediate budgetary savings and provides for growth by simple add-on rather than replacement of older equipment.
Monitoring tools were once used primarily for ad hoc diagnostics, but as networks advance and evolve, these solutions are now permanent additions that deliver vital information for today's modern digital businesses. Trends around BYOD, IoT, social media, and more, are increasing network traffic and malicious activity, making it harder to ensure performance and secure users. Understanding the role of a TAP and Packet Broker — and what mistakes to avoid when deploying them — will allow you to create a flexible visibility architecture that meets the needs of IT, while saving time and money.