To achieve continuous visibility and control of today's complex networks, organizations rely on specialized monitoring and security tools that are connected to live links. In my last blog, I discussed how TAPs facilitate that failsafe connection. In this blog, I'd like to expand beyond the TAP and look at the role Packet Brokers play in an organization's visibility architecture. This includes exploring some of the common mistakes engineers should avoid when utilizing the more sophisticated Packet Broker features. Being aware of these issues can help network managers implement an efficient visibility architecture and avoid errors that could adversely affect network monitoring, performance, and ultimately business operations.
Here are 5 common mistakes that are made when deploying Packet Brokers, and how to avoid them:
1. Don't Mistake a Packet Broker for a TAP
TAPs are relatively simple devices that are often confused with Packet Brokers. Both TAPs and Packet Brokers provide tool connectivity and have similar feature sets. TAPs, however, provide failsafe network ports. These ports have copper relays or optical splitters that keep network traffic flowing even if power to the TAP is lost. Packet Brokers generally do not have failsafe network ports. Therefore, it's important to make the initial network connections using TAPs and send the traffic through to the Packet Broker for management.
There are some combination TAP/Packet Brokers on the market that provide failsafe network connections and Packet Broker features. These combo (or Hybrid) units can save space and money depending on network size, complexity, and ports needed.
2. Buying New Monitoring Tools When New Links are Too Fast for Older Equipment
With ever-increasing bandwidth demands on networks, new links are often moving from copper connections (10Mbps to 100Mbps) to optical fiber (1Gbps), or from lower-speed fiber (1Gbps) to high-speed fiber (10Gbps – 100Gbps). Changing link media does not necessarily require replacing all legacy monitoring tools. Packet Brokers provide load balancing features that allow high-speed network links to evenly distribute the traffic among a number of lower-speed tools.
For example, an incoming network connection at 40Gbps can be connected to a Packet Broker and distributed through output/tool ports to five monitoring devices with a maximum processing capacity of 8Gbps each. This feature allows network managers to save CAPEX on monitoring tools while keeping pace with faster networking speeds.
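To keep each monitoring tool seeing complete conversations, Packet Brokers typically load-balance by hashing flow identifiers (for example, the 5-tuple) so that every packet of a given flow is steered to the same tool port. The sketch below is an illustrative Python model of that idea, not vendor firmware; the function name and fields are hypothetical.

```python
import hashlib

def tool_port_for_flow(src_ip: str, dst_ip: str,
                       src_port: int, dst_port: int,
                       proto: str, num_tools: int) -> int:
    """Pick an output/tool port for a flow via a stable 5-tuple hash.

    Hashing the whole 5-tuple means every packet of a flow lands on
    the same lower-speed tool, so no tool sees half a conversation.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    # Use the first 4 bytes of the digest as an integer, then
    # map it onto one of the available tool ports.
    return int.from_bytes(digest[:4], "big") % num_tools

# A 40Gbps link split across five 8Gbps tools: each flow is pinned
# to one of the five output ports.
port = tool_port_for_flow("10.0.0.1", "10.0.0.2", 51000, 443, "tcp", 5)
```

Because the hash is deterministic, rebalancing only changes when the number of tool ports changes; real Packet Brokers implement the same idea in hardware at line rate.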
3. Not Using a Packet Broker for In-Line Security Tools
Many security tools require in-line access to links, meaning that live traffic passes through the tool and back into the network. There are many TAPs that provide in-line access so the tool can have real-time control over live traffic. These TAPs protect live links through an active bypass function that keeps network traffic flowing even if the security tool is taken offline.
In complex networks, managers may be tempted to use multiple independent TAPs for in-line security tools, and Packet Brokers only to manage passive monitoring tools. Packet Brokers, however, can pass real-time traffic delivered through in-line TAPs. This allows the Packet Broker to manage both in-line security and passive monitoring tools through one central device, simplifying deployment of all connected tools.
4. Packet Slicing is Not Packet Manipulation
Packet Slicing is a Packet Broker feature that removes the payload from a packet before it arrives at the monitoring tool. This is done when the monitoring tool requires only packet header information. Packet slicing can be an efficiency feature that allows the monitoring tool to work faster. It's also an important feature for privacy and legal compliance when monitoring equipment shouldn't have access to actual payload data. Accurate traffic monitoring, however, often requires visibility into the entire packet in order to capture and report on packet size and transit time through the network.
There are Packet Brokers that provide packet manipulation, which is similar to slicing but more complex and more accurate for traffic monitoring and planning. This is done by replacing payload information with random 1s and 0s rather than simply removing the payload. Packet manipulation provides privacy compliance, accurate traffic management, and a wider range of user-defined options for traffic analysis.
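The difference between the two features can be sketched in a few lines. This is an illustrative Python model under simplifying assumptions (a fixed 54-byte Ethernet/IPv4/TCP header with no options; function names are hypothetical), not how a Packet Broker is actually implemented:

```python
import os

HEADER_LEN = 54  # Ethernet (14) + IPv4 (20) + TCP (20), no options

def slice_packet(packet: bytes, snap_len: int = HEADER_LEN) -> bytes:
    """Packet slicing: truncate to the header bytes.

    The payload is gone, so the tool can no longer see the
    packet's original length.
    """
    return packet[:snap_len]

def mask_payload(packet: bytes, header_len: int = HEADER_LEN) -> bytes:
    """Packet manipulation: overwrite the payload with random bits.

    Payload data never reaches the tool, but the packet keeps its
    original size, so length- and timing-based analysis stays accurate.
    """
    payload_len = max(0, len(packet) - header_len)
    return packet[:header_len] + os.urandom(payload_len)

pkt = bytes(HEADER_LEN) + b"sensitive payload"
sliced = slice_packet(pkt)    # headers only; length information lost
masked = mask_payload(pkt)    # same length, payload scrambled
```

The key observation is in the lengths: `sliced` is always 54 bytes regardless of the original packet, while `masked` matches the original packet byte for byte in size, which is why manipulation supports more accurate traffic measurement.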
5. Not Planning for Scale
When designing a visibility architecture, it's critical that future needs be included in the plan. Plan A is purchasing more equipment and ports than initially required to be sure that future capacity for new links and tools is built into the initial plan. Plan B is to purchase only what is needed for today and worry about future needs and budget when the time comes.
However, the best plan, Plan C, is to carefully evaluate all Packet Broker equipment options to build extensibility into the plan without breaking the budget. Some Packet Brokers offer scale-out options that allow the purchase of smaller initial units for immediate needs and provide extension units for future growth. This plan allows immediate budgetary savings and provides for growth by simple add-on rather than replacement of older equipment.
Monitoring tools were once used primarily for ad hoc diagnostics, but as networks advance and evolve, these solutions are now permanent additions that deliver vital information for today's modern digital businesses. Trends around BYOD, IoT, social media, and more, are increasing network traffic and malicious activity, making it harder to ensure performance and secure users. Understanding the role of a TAP and Packet Broker — and what mistakes to avoid when deploying them — will allow you to create a flexible visibility architecture that meets the needs of IT, while saving time and money.