Why You Should Use Packet Analysis to Complement NetFlow When Monitoring Network Performance
March 20, 2018

Chris Bloom
Savvius


Most organizations understand that centralized network monitoring is vital to maintaining the health of critical infrastructure and applications. And while solutions using NetFlow undoubtedly help gain perspective into capacity planning, trend analysis, and utilization, they lack the important precision of packet-based analytics tools that provide root-cause analysis for application performance, latency, TCP/IP or VoIP problems. Both monitoring technologies have their advantages and ideal use cases, so let's see how enterprises can maximize existing infrastructure and equipment investments by using packet analytics to complement NetFlow.

Evolution of Network Monitoring Technologies

First up is NetFlow. This is a well-known, well-established standard that summarizes network traffic at the level of conversations, or flows. Compared with even older protocols like SNMP, NetFlow offers greater precision, delivering data at intervals of around one second, depending on the equipment being monitored. NetFlow also has the advantage of providing a good global view of a network, which can be extremely helpful when monitoring the network's general health.
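To make that conversational view concrete, here is a minimal sketch of the kind of record a NetFlow-style exporter produces for each flow. The field names are rough approximations of NetFlow v5 fields and the values are illustrative, not a literal export format.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Approximation of what a NetFlow v5-style exporter reports per flow."""
    src_addr: str      # source IP of the conversation
    dst_addr: str      # destination IP
    src_port: int      # source transport port
    dst_port: int      # destination transport port
    protocol: int      # IP protocol number (6 = TCP, 17 = UDP)
    packets: int       # packets observed in this flow
    octets: int        # bytes observed in this flow
    first_seen: float  # timestamp of the first packet
    last_seen: float   # timestamp of the last packet

# One record summarizes an entire conversation: endpoints and volume,
# but nothing about payload contents or per-packet timing.
web_flow = FlowRecord("10.0.0.15", "192.168.1.20", 49512, 443, 6,
                      packets=1200, octets=1_450_000,
                      first_seen=1521543600.0, last_seen=1521543660.0)
print(web_flow)
```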

Next, let's turn to packet-based analytics. This breed of network monitoring solution delivers much higher troubleshooting value to NetOps teams thanks to its granularity and access to raw data. Because it works at the packet level, it is very precise, with timestamps resolved down to a few nanoseconds. On top of that, the data is based entirely on the original payload, so nothing is abbreviated or aggregated.
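By contrast, the sketch below reads a capture file with the scapy library (an assumption on my part; any capture library would do) and prints the two things flow summaries cannot give you: precise per-packet timestamps and the original payload bytes. The file name capture.pcap is hypothetical.

```python
from scapy.all import rdpcap, IP, TCP, Raw  # pip install scapy

packets = rdpcap("capture.pcap")  # hypothetical capture file

for pkt in packets:
    if IP in pkt and TCP in pkt:
        ts = float(pkt.time)  # per-packet timestamp, sub-second resolution
        payload = bytes(pkt[Raw].load) if Raw in pkt else b""  # unabridged payload
        print(f"{ts:.6f} {pkt[IP].src}:{pkt[TCP].sport} -> "
              f"{pkt[IP].dst}:{pkt[TCP].dport} {len(payload)} payload bytes")
```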


What's the Difference Between Flow- and Packet-Based Analytics?

The key benefit of packet-based solutions is that they provide much more information for network diagnostics. The packet-based approach is also completely passive, so it doesn't burden the network or interfere with existing operations or services. As you can imagine, this is very important: when there is a problem, nobody wants to exacerbate it by piling on more network traffic.

NetFlow data, which typically comes from Layer 3 devices like routers and firewalls, provides good information about traffic volume between devices. But when an application spreads its traffic across multiple ports, NetFlow is at a disadvantage. This is where packet-based analytics come into their own. Packet analysis lets users drill down and discover how the network is behaving, not just whether it is operating. Because every packet, and everything inside it, is captured, the data is also 100 percent accurate. And finally, packet-based analysis can be implemented with very little impact on the network, while supporting monitoring and troubleshooting simultaneously.

When it comes to troubleshooting, flow-based technology is useful only up to Layer 3 (and occasionally Layer 4), so at best we can see where traffic is being generated. When the NetOps team starts getting trouble tickets about a slow network or a CRM that cannot save any records (for example), they need to look for the root cause. In this scenario, NetFlow would reveal that traffic is passing between the client and the server, and that it is running on a specific port. It could also tell you how much traffic each client produces. In other words, you could verify simple things such as whether the server is up and running and whether the port is operational, as the sketch below illustrates.
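In practice, that kind of check amounts to filtering flow summaries. The sketch below uses illustrative, hard-coded records (the tuple layout and the addresses are assumptions, not a real NetFlow export) to verify that clients are reaching a database server on TCP port 1433 and to total the traffic each client sends.

```python
from collections import defaultdict

# Illustrative flow summaries: (src_ip, dst_ip, dst_port, ip_protocol, bytes)
flows = [
    ("10.0.0.15", "10.0.1.5", 1433, 6, 480_000),  # client A -> database server
    ("10.0.0.22", "10.0.1.5", 1433, 6, 120_000),  # client B -> database server
    ("10.0.0.15", "10.0.1.9",  443, 6,  75_000),  # unrelated HTTPS traffic
]

SERVER, PORT = "10.0.1.5", 1433

# The NetFlow-level answer: is anyone talking to the server on that port,
# and how much traffic does each client generate?
bytes_per_client = defaultdict(int)
for src, dst, dport, proto, octets in flows:
    if dst == SERVER and dport == PORT and proto == 6:
        bytes_per_client[src] += octets

print("conversations seen on the port:", bool(bytes_per_client))
for client, total in bytes_per_client.items():
    print(f"{client}: {total} bytes to {SERVER}:{PORT}")
```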

The key here is that NetFlow alone isn't adequate in a modern network setting. It struggles to identify any activity associated with content delivery networks and applications that use multiple TCP or UDP ports. It also has no visibility into the payload or its contents. You may be able to see that a server has an issue, but that's far from definitive.

Take a look at the screenshot below, taken from a real use case. A client is unable to get a response from a server, and its task is canceled. Investigating the problem, the packet-based solution quickly identifies the issue and shows its cause. In the text box at the bottom we see the message: “Your server command (process id 169) was deadlocked with another process and has been chosen as deadlock victim. Re-run your command.” This error message tells us that two tasks requested access to the same resource at the same time. Armed with this information, the network team quickly determines that the problem is with the application, not the network, and gives the application team actionable data to address the issue directly.
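A packet-level tool can surface that message because the payload itself is in the capture. The sketch below illustrates the idea with scapy, scanning TCP payloads for the deadlock text. The capture file name is hypothetical, and a plain byte-string match is a simplification (a real SQL Server conversation would need the TDS protocol decoded), so treat this as an outline of the approach rather than a working decoder.

```python
from scapy.all import rdpcap, IP, TCP, Raw  # pip install scapy

ERROR_MARKER = b"deadlock victim"  # fragment of the server's error text

for pkt in rdpcap("capture.pcap"):  # hypothetical capture file
    if IP in pkt and TCP in pkt and Raw in pkt:
        payload = bytes(pkt[Raw].load)
        if ERROR_MARKER in payload:
            # The server names the root cause itself: an application-side
            # deadlock, not a network problem.
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: {payload!r}")
```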


Packet-based analysis has been designed specifically to reveal the “how” of the network. Rather than being about just the volume of traffic, these solutions expose vital details about performance and application response. Users can compare network latency with application latency. They can see the efficiency of TCP communications on their network. They can evaluate the performance of VoIP and video over the network and determine if these real-time protocols are prioritized correctly. None of this can be achieved with NetFlow or its derivatives.
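To make one of those comparisons concrete, the sketch below estimates network latency from the TCP handshake (SYN to SYN-ACK) and application latency from the gap between the first request byte and the first response byte. It assumes scapy, a hypothetical capture file, and illustrative endpoint addresses, and it ignores edge cases such as retransmissions and multiple connections.

```python
from scapy.all import rdpcap, IP, TCP, Raw  # pip install scapy

packets = rdpcap("capture.pcap")  # hypothetical capture file
CLIENT, SERVER, PORT = "10.0.0.15", "10.0.1.5", 1433  # illustrative endpoints

syn_ts = synack_ts = request_ts = response_ts = None

for pkt in packets:
    if IP not in pkt or TCP not in pkt:
        continue
    flags = int(pkt[TCP].flags)  # 0x02 = SYN, 0x10 = ACK
    ts = float(pkt.time)
    if pkt[IP].src == CLIENT and pkt[IP].dst == SERVER and pkt[TCP].dport == PORT:
        if flags == 0x02 and syn_ts is None:
            syn_ts = ts                      # client's SYN
        elif Raw in pkt and request_ts is None:
            request_ts = ts                  # first request byte from the client
    elif pkt[IP].src == SERVER and pkt[IP].dst == CLIENT and pkt[TCP].sport == PORT:
        if (flags & 0x12) == 0x12 and synack_ts is None:
            synack_ts = ts                   # server's SYN-ACK
        elif Raw in pkt and request_ts is not None and response_ts is None:
            response_ts = ts                 # first response byte from the server

if syn_ts is not None and synack_ts is not None:
    print(f"network round trip  : {(synack_ts - syn_ts) * 1000:.2f} ms")
if request_ts is not None and response_ts is not None:
    print(f"application response: {(response_ts - request_ts) * 1000:.2f} ms")
```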

To help make my point, here are five common questions that can be answered when packet-based analysis is used in tandem with NetFlow:

■ Is it the network or the application?

■ Is the issue isolated to a single user, a single server, or the network overall?

■ Are critical applications using network resources efficiently?

■ Is my network correctly configured for unified communications, and are unified communications co-existing with other network transactions?

■ Are critical functions, for example user authentication, failing due to protocol issues?

NetFlow certainly has its place in the network monitoring hierarchy, but its limitations make it less than ideal in many situations. For most network professionals, having access to packet data is a no-brainer and significantly accelerates mean-time-to-resolution (MTTR). The challenge is in learning how to balance the way we use these tools in our approach to network monitoring.

Chris Bloom is Senior Manager of Technical Alliances at Savvius
