Network Forensics in a World of Faster Networks
April 18, 2014

Jay Botelho
LiveAction

Enterprises are relying on their networks more than ever, but the volume of traffic on faster, higher-bandwidth networks is outstripping the data collection and analysis capabilities of traditional network analysis tools. Yesterday's network analyzers, originally designed for 1G or slower networks, can't handle the increased traffic, resulting in dropped packets and erroneous reports.
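To put those volumes in perspective, the short Python sketch below runs a back-of-envelope calculation (standard line-rate arithmetic, not a figure from the survey) of how much traffic a fully utilized link generates per hour:

# Rough back-of-envelope sketch: traffic volume per hour at various line
# rates, assuming full utilization. Illustrative only; real capture volumes
# depend on actual utilization and per-packet overhead.

def volume_gb_per_hour(line_rate_gbps, utilization=1.0):
    """Approximate gigabytes of traffic per hour at a given line rate."""
    bits_per_hour = line_rate_gbps * 1e9 * utilization * 3600
    return bits_per_hour / 8 / 1e9  # bits -> bytes -> gigabytes

for rate in (1, 10, 40, 100):
    print(f"{rate:>3} Gbps ~= {volume_gb_per_hour(rate):,.0f} GB per hour")

# Output:
#   1 Gbps ~= 450 GB per hour
#  10 Gbps ~= 4,500 GB per hour
#  40 Gbps ~= 18,000 GB per hour
# 100 Gbps ~= 45,000 GB per hour

Even at modest utilization, a 10G link generates an order of magnitude more data per hour than the 1G links yesterday's analyzers were built for, which is where dropped packets and analysis gaps begin.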

Earlier this year, WildPackets surveyed more than 250 network engineers and IT professionals to better understand how network forensics solutions were being used within the enterprise. Respondents hailed from organizations of all sizes and industries, with the plurality (30%) coming from the technology industry. Furthermore, 50% of all respondents identified themselves as network engineers, with 28% at the director level or above.

According to the survey, 72% of organizations have increased their network utilization over the past year, resulting in slower problem identification and resolution (38%), less real-time visibility (25%) and more dropped packets leading to inaccurate results (15%).

What we found most interesting was that even though 66% of the survey respondents supported 10G or faster network speeds, only 40% of respondents answered affirmatively to the question "Does your organization currently have a network forensics solution in place?"

So what's the big deal? Faster network speeds not only make securing and troubleshooting networks more difficult; traditional network analysis solutions simply cannot keep up with the massive volumes of data being transported.

Organizations need better visibility into the data traversing their networks, and deploying a network forensics solution is the only way to gain 24/7 visibility into business operations while analyzing network performance and IT risks with 100% reliability. Traditional solutions rely on sampled traffic and high-level statistics, which lack the detail and hard evidence that IT engineers need to quickly troubleshoot problems and characterize security attacks.
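To illustrate why sampled statistics fall short, here is a minimal, self-contained Python sketch using synthetic traffic (not survey data): a short attack burst slips past 1-in-1,000 packet sampling entirely, while a full-capture store retains every packet for later investigation.

# Minimal sketch with synthetic data: contrast 1-in-N sampled statistics
# with full packet retention. A brief burst of malicious packets can be
# missed entirely by the sampled view, while a full capture keeps the
# evidence an investigator needs.

SAMPLE_RATE = 1000  # inspect 1 of every 1,000 packets, as flow samplers often do

# Simulate one million packets with a 200-packet attack burst in the middle.
packets = ["normal"] * 1_000_000
for i in range(500_001, 500_201):
    packets[i] = "attack"

sampled_hits = sum(1 for i, p in enumerate(packets)
                   if i % SAMPLE_RATE == 0 and p == "attack")
full_hits = packets.count("attack")

print(f"Attack packets seen by 1-in-{SAMPLE_RATE} sampling: {sampled_hits}")  # 0
print(f"Attack packets preserved by full capture: {full_hits}")               # 200

On a real network the same principle applies: flow records and sampled counters summarize behavior, but only continuous packet capture preserves the raw evidence needed to reconstruct an incident.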

With faster networks leading to a significant increase in the volume of data being transported (74% of survey respondents have seen an increase in the volume of data traversing their networks over the last year), network forensics has become an essential IT capability that should be deployed at every network location. The recent rise in security breaches is a prime example of why continued adoption of network forensics within the security operations center helps organizations pinpoint breaches and infiltrations.

In the past, network forensics was often considered synonymous with security incident investigations, but the results of our survey show that organizations are using these solutions for a variety of reasons. While 25% of respondents said they deploy network forensics for troubleshooting security breaches, almost an equal number (24%) cited verifying and troubleshooting transactions as the key function. Another 17% said analyzing network performance on 10G and faster networks was their main use for forensics, another 17% reported using the solution for verifying VoIP or video traffic problems, and 14% for validating compliance.

In addition, organizations said the biggest benefits of network forensics include improved overall network performance (40%), reduced time to resolution (30%), and reduced operating costs (21%).

Enterprises recognize that network forensics gives them the necessary visibility into their business operations, and with more 40G and 100G network deployments forecast for the coming year, network forensics will be a critical tool for gaining visibility into these high-performance networks and troubleshooting issues when they arise. Given the many uses of network forensics, the gap between organizations deploying high-speed networks and those deploying network forensics should shrink over the coming years.

Jay Botelho is Director of Engineering at LiveAction. At the time of this survey he was Director of Product Management at WildPackets.