Network Forensics at 40G and 100G Speeds
February 23, 2016

Mandana Javaheri
Savvius


The 40G and 100G market will generate tens of billions of dollars in revenue in the next few years, according to a recent Infonetics market forecast. Traffic growth, which some analysts estimate at 50 to 60 percent annually, creates new opportunities but also puts enormous pressure on networks and raises new challenges.

Network forensics is one of these new challenges. Although network forensics is most commonly associated with investigating security incidents and breaches, it is also valuable for providing visibility into network activity, troubleshooting issues quickly, and diagnosing common network problems such as connectivity failures, unexpected changes in utilization, or poor VoIP call quality.

Here are some of the ways you can prepare for successful network forensics as network speeds increase.

Know Your Network

To identify anomalies, you first need to define or benchmark what is "normal" for your network. Your network performance solution is your best friend here. Baselining key business applications, and measuring network-level metrics such as packet size distribution, protocol mix, and node usage, builds an accurate model of normal behavior that you can compare against when problems arise.
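
As a concrete illustration, the short Python sketch below summarizes packet sizes and protocol mix from a representative capture. The scapy library, the file name, and the 256-byte bucketing are assumptions for illustration, not a prescribed workflow.

from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP   # assumes scapy is installed

packets = rdpcap("baseline.pcap")            # hypothetical capture taken during normal operation

size_buckets = Counter()
port_mix = Counter()
for pkt in packets:
    size_buckets[(len(pkt) // 256) * 256] += 1      # group packet sizes into 256-byte buckets
    if IP in pkt and TCP in pkt:
        port_mix[pkt[TCP].dport] += 1               # rough application mix by TCP destination port
    elif IP in pkt and UDP in pkt:
        port_mix[pkt[UDP].dport] += 1

print("packet size distribution:", sorted(size_buckets.items()))
print("top destination ports:", port_mix.most_common(5))

Run against captures taken during normal operation, a summary like this becomes the baseline you compare against when something looks wrong.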

Prepare for Everything

It is not just about having the right network forensics solution; you need the right infrastructure for your new, faster network as well. From your switches to your routers to your network packet brokers to your filtering criteria to your monitoring and forensics tools, everything has to be rated for these higher speeds.

And most importantly, you need to know your network and ask yourself the right questions:

What is your strategy?

Does it make sense to load-balance your traffic across multiple network forensics devices to get full visibility?

Does it make sense to filter out the traffic you don't need? (See the capture-filter sketch below.)

What is your use case?

How do you usually find out there is an issue?

Is it by constantly monitoring the network or by receiving trouble tickets about performance?

Every network has its own specific needs, so make sure you know what those needs are and pick a network forensics partner that will help you meet them.
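
On the filtering question above, one common approach is a capture (BPF) filter applied before traffic reaches the forensics tool, so only the packets you care about are processed and stored. The sketch below is a minimal illustration using Python and the scapy library; the interface name, ports, and packet count are hypothetical and would be replaced by whatever your own use case calls for.

from scapy.all import sniff   # assumes scapy is installed

# Hypothetical policy: keep web and VoIP signaling traffic, ignore the rest.
BPF_FILTER = "tcp port 80 or tcp port 443 or udp port 5060"

def handle(pkt):
    # A real deployment would write matching packets to the forensics store;
    # printing a one-line summary stands in for that here.
    print(pkt.summary())

# Interface name and packet count are illustrative.
sniff(iface="eth0", filter=BPF_FILTER, prn=handle, count=100)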

Smart Storage

One of the most important parts of making sure network-level data is available when you need it is defining your storage requirements. The faster the network, the more storage is required to retain what you need.

A fully utilized 1G network generates roughly 11 TB of data per day; at 40G or 100G, the same level of utilization means hundreds of terabytes to more than a petabyte per day. To control storage costs, you will need to get smarter about what is stored. This is only possible by knowing the network and your specific use cases. Techniques like filtering, packet slicing, and load-balancing will help you use your storage more efficiently, while extended storage, SAN, and cloud-based technologies are also available if needed.
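
The arithmetic behind these numbers is simple, and the short Python sketch below extends it to 40G and 100G and shows how utilization and packet slicing change the picture. The 50 percent utilization and 25 percent keep-ratio figures are illustrative assumptions, not measurements.

def daily_terabytes(link_gbps, utilization=1.0, keep_ratio=1.0):
    """Rough bytes-per-day for one direction of a link, expressed in TB."""
    bytes_per_day = link_gbps * 1e9 / 8 * 86400 * utilization * keep_ratio
    return bytes_per_day / 1e12

for gbps in (1, 40, 100):
    full = daily_terabytes(gbps)
    trimmed = daily_terabytes(gbps, utilization=0.5, keep_ratio=0.25)   # 50% busy, keep ~25% of each packet
    print(f"{gbps:>3}G link: ~{full:,.0f} TB/day at full load, ~{trimmed:,.0f} TB/day at 50% load with slicing")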

Depending on your network traffic and your forensics and storage requirements, pick the amount and type of storage you need today and make sure it can scale to meet your needs in the future.

Intelligent Forensics

Searching through large amounts of packet data to find that one essential trace can be a frustrating process, so choose your search criteria and the type of analytics you run on your traffic wisely. Use your knowledge of the network baseline to define the forensics criteria, and make your search as focused as possible using filters. Define the time range, the application, and the server or client that is experiencing the issue, and drill down to as much detail as needed for troubleshooting. For example, if your problem is not VoIP or wireless related, don't spend hardware resources analyzing that traffic.
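
As an illustration of a focused search, the Python sketch below narrows a stored capture to an incident time window, a single client, and one application port before any deeper analysis. The file name, addresses, timestamps, and port are hypothetical, and scapy is assumed to be installed.

from scapy.all import rdpcap, IP, TCP   # assumes scapy is installed

WINDOW_START, WINDOW_END = 1456185600, 1456189200    # epoch seconds bounding the incident window
CLIENT, APP_PORT = "10.1.2.3", 443                    # client and application under investigation

packets = rdpcap("retained_traffic.pcap")             # hypothetical rolling capture file
matches = [
    pkt for pkt in packets
    if WINDOW_START <= float(pkt.time) <= WINDOW_END
    and IP in pkt and CLIENT in (pkt[IP].src, pkt[IP].dst)
    and TCP in pkt and APP_PORT in (pkt[TCP].sport, pkt[TCP].dport)
]
print(f"{len(matches)} packets match the time window, client, and port criteria")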

By knowing your network, using the right techniques and planning ahead, you can turn 40G and 100G network challenges into new opportunities.

Mandana Javaheri is CTO of Savvius.
