Every internet-connected network needs a visibility platform for traffic monitoring, information security, and infrastructure security. To accomplish this, most enterprise networks deploy four to seven specialized tools on network links to monitor, capture, and analyze traffic. Connecting tools to live links with TAPs allows network managers to safely see, analyze, and protect traffic without compromising network reliability.
However, as with most networking equipment, it's critical that installation and configuration are done properly. TAPs can be deceptively simple to deploy, which leads to some common mistakes that can have a big impact on the network. Here are five tips for properly deploying your network TAP technology:
1. Filter Assignments
Many intelligent TAPs have traffic-filtering features that allow certain traffic to be eliminated from the stream assigned to a given tool. Most TAPs use hierarchical filtering, meaning filter rules are evaluated in a linear, descending order. For example, if HTTP traffic is eliminated in rule #1, it cannot later be included in rule #2 or any rule beyond. This makes meticulous advance planning imperative: the planner must understand which tools need which data, then prioritize the tools in the correct order so the right information reaches the right tool.
In larger networks this can be a very complex task: sending filtered streams of information to certain upstream tools without jeopardizing the full traffic set required by other downstream tools. If certain data is eliminated before it arrives at the tool expecting it, the analysis will be flawed, which may trigger alarms or, worse, force a link out of service until the filter rules are corrected.
Fortunately, there are a few TAPs that use innovative independent filter rules and do the math in the background. With independent filters, downstream tools are not dependent on upstream rules. This increases information accuracy and dramatically speeds deployment. Building flexible rules and applying them independently to individual tools cuts planning time from hours to minutes and eliminates potential service-affecting configuration errors.
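The difference between the two filtering models can be sketched in a few lines. This is an illustrative simulation, not any vendor's API: the `Packet`, rule, and tool names are hypothetical, and real TAPs filter in hardware.

```python
# Contrast hierarchical (order-dependent) and independent filter rules.
# All names here are illustrative; real TAP filtering happens in hardware.

def hierarchical_dispatch(packets, rules):
    """Rules are evaluated top-down; traffic a rule claims is removed
    from the stream seen by every rule below it."""
    remaining = list(packets)
    streams = {rule["tool"]: [] for rule in rules}
    for rule in rules:
        streams[rule["tool"]].extend(p for p in remaining if rule["match"](p))
        remaining = [p for p in remaining if not rule["match"](p)]
    return streams

def independent_dispatch(packets, rules):
    """Each tool's rule sees the full traffic stream; ordering is irrelevant."""
    return {rule["tool"]: [p for p in packets if rule["match"](p)]
            for rule in rules}

packets = [{"proto": "http"}, {"proto": "dns"}, {"proto": "http"}]
rules = [
    {"tool": "ids",      "match": lambda p: p["proto"] == "http"},
    {"tool": "recorder", "match": lambda p: True},  # wants ALL traffic
]

# Hierarchical: the recorder never sees HTTP -- rule #1 already consumed it.
print(len(hierarchical_dispatch(packets, rules)["recorder"]))  # 1
# Independent: the recorder receives the complete stream.
print(len(independent_dispatch(packets, rules)["recorder"]))   # 3
```

The hierarchical result is exactly the failure mode described above: a downstream tool silently loses data because an upstream rule consumed it first.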
2. Port Mapping Errors
Many TAPs have 16 or more ports. So even when network links and tools are physically plugged into the correct ports, the internal maps of incoming traffic, outgoing traffic, and through traffic must be properly configured. Many TAPs are configured through a command-line interface (CLI): each port must be directed, via a set of specialized commands, to act as an input for network traffic or an output to tools. Errors occur when network ports are internally mapped to the wrong tools, sending the wrong information and therefore producing erroneous results.
Some TAPs, however, use an advanced graphical user interface (GUI), making the configuration task simpler and faster. By taking the programming language out of configuration, port mapping can be as simple as dragging a cursor and clicking on the correct ports. GUIs are simple to use, save time, and often raise mis-configuration alarms when configuration rules are broken. Using a TAP with an advanced GUI can improve accuracy and eliminate configuration mapping errors.
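The kind of sanity checking a good GUI performs can be approximated in software. Below is a minimal sketch, assuming a hypothetical port map expressed as a `{network_port: tool_port}` dictionary on a 16-port unit; real TAPs use vendor-specific configuration formats.

```python
# Sanity-check a hypothetical TAP port map before applying it.
# The {network_port: tool_port} format is illustrative, not a vendor format.

def validate_port_map(port_map, num_ports=16):
    """Return a list of configuration problems (empty list = map looks sane)."""
    errors = []
    for net, tool in port_map.items():
        if not (1 <= net <= num_ports) or not (1 <= tool <= num_ports):
            errors.append(f"port out of range: {net} -> {tool}")
        if net == tool:
            errors.append(f"port {net} mapped to itself")
    # A tool port fed by two network links would interleave unrelated traffic.
    tool_ports = list(port_map.values())
    for t in set(tool_ports):
        if tool_ports.count(t) > 1:
            errors.append(f"tool port {t} receives multiple network feeds")
    return errors

# Ports 1 and 2 collide on tool port 9; port 3 loops; port 20 doesn't exist.
for problem in validate_port_map({1: 9, 2: 9, 3: 3, 4: 20}):
    print(problem)
```

Catching these mistakes at configuration time, rather than after a tool starts reporting nonsense, is precisely the value of mis-configuration alarms.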
3. Connecting Network Links to Tool Ports
TAP ports are often designated for specific functions and designed as such. Ports intended to connect to network links provide fail-safe technology: if power is lost to the TAP, fail-safe keeps the live network link active and passing data. This network protection technology is built into network port cards, including fast relays for copper links and optical splitters for fiber links. However, ports designed specifically to connect tools, not to interface with live links, have no fail-safe relays or splitters. If those tool ports are used as network access ports and power is lost to the unit, the network link will fail.
It is possible to avoid this mistake by looking for TAPs that provide the flexibility to use any port for either network or tool access. These TAPs include fail-safe relays on all ports, so it doesn't matter which port is used for network or tool access.
4. Mismatched Optical Fiber Connections
Multi-mode and single-mode optical fibers have different core sizes and different transmission characteristics. In designing optical networks, it's important not to mix these two media. Single-mode fiber is generally used for higher bandwidth (>10Gbps) and longer distances; multi-mode for shorter distances and lower bandwidth.
There are intelligent TAPs that can provide media conversion when optical link and tool interfaces are not compatible. For example, a long-distance, single-mode link may be connected to a TAP port and mapped to a monitoring tool that has a multi-mode interface. The media conversion happens in the TAP as long as the speeds are compatible.
5. Oversubscribing Ports
TAP ports are designed to pass traffic within a specific bandwidth range. There are copper and optical fiber ports designed for speeds of 1Gbps and below. Other ports use Small Form-factor Pluggable (SFP) cages that accept a variety of single-mode and multi-mode fiber interfaces at speeds of 10Gbps and higher. Monitoring tools use similar ports to connect to links through TAPs. When connecting tools to links through TAPs, it's important to understand the processing capacity of the tool and the speed of the link to be sure they're compatible. Oversubscribing a TAP port or a tool port will cause inaccurate analysis results because packets will be randomly dropped to meet port limitations.
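The oversubscription risk is easy to quantify with back-of-the-envelope math. One common trap: a full-duplex link tapped in both directions can deliver up to twice its line rate into a single tool port. The sketch below is illustrative arithmetic, not any vendor's capacity-planning formula.

```python
# Back-of-the-envelope oversubscription check (illustrative only).
# A full-duplex link tapped in both directions can push up to 2x its
# line rate into one aggregation or tool port.

def oversubscribed(link_gbps, directions, tool_port_gbps):
    """Return (is_oversubscribed, worst-case aggregate Gbps into the port)."""
    aggregate = link_gbps * directions
    return aggregate > tool_port_gbps, aggregate

# Both directions of a 1Gbps link aggregated into one 1Gbps tool port:
over, agg = oversubscribed(1.0, directions=2, tool_port_gbps=1.0)
print(over, agg)  # True 2.0 -- packets will be dropped under load
```

The check is trivial, but it is exactly the compatibility question the paragraph above asks: worst-case aggregate traffic into a port must not exceed what that port, or the tool behind it, can process.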
Paying attention to these five potential trouble spots will contribute to a fast and reliable TAP deployment. Shopping for intelligent TAPs that provide configuration mapping, filter alarms, media conversion, and simple GUIs will make the job faster and reduce potential network issues down the road.