100G is Increasingly Popular, and It's Creating a Host of Management Challenges
November 02, 2020

Nadeem Zahid
cPacket Networks


Name virtually any technology trend — digital transformation, cloud-first operations, datacenter consolidation, mobility, streaming data, AI/ML, the application explosion, etc. — and they all have one thing in common: an insatiable need for higher bandwidth (and, often, lower latency). The result is a steady push of 10Gbps and 25Gbps network infrastructure toward the edge, and increasing adoption of 100Gbps in enterprise core, datacenter and service provider networks.

Initial deployments focused on backbone interconnects (historically a dual-ring failover topology; more recently mesh connectivity), driven primarily by north-south traffic. Data center adoption has followed, generally in a spine-leaf architecture to handle the growth in east-west connections.

Beyond the hunger for bandwidth, 100G is having a moment for several reasons: a commodity-driven drop in cost, increasing availability of 100G-enabled components, and the ability to easily break a 100G link into 10G/25G line rates. In light of these trends, analyst firm Dell'Oro Group expects 100G adoption to hit its stride this year and remain strong over the next five years.

Nobody in their right mind disputes the notion that enterprises and service providers will continue to adopt ever-faster networks. However, the same thing that makes 100G desirable — speed — conspires to create a host of challenges when trying to manage and monitor the infrastructure. The simple truth is that the faster the network, the more quickly things can go wrong. That makes monitoring for things like regulatory compliance, load balancing, incident response/forensics, capacity planning, etc., more important than ever.

At 10G, a minimum-size packet occupies the wire for just 67 nanoseconds; at 100G that window shrinks tenfold, with packets flying by every 6.7 nanoseconds. And therein lies the problem: when it comes to 100G, traditional management and monitoring infrastructure can't keep up.
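To make those numbers concrete, here is a quick back-of-the-envelope calculation, written as a minimal Python sketch. It assumes the worst case of minimum-size 64-byte Ethernet frames plus the standard 20 bytes of preamble and interframe gap on the wire:

```python
# Serialization time and packet rate for minimum-size Ethernet frames.
# On the wire, a 64-byte frame carries 20 extra bytes of overhead:
# 7-byte preamble + 1-byte start-of-frame delimiter + 12-byte interframe gap.
WIRE_BYTES = 64 + 7 + 1 + 12   # 84 bytes per minimum-size frame

for gbps in (10, 25, 100):
    frame_ns = WIRE_BYTES * 8 / gbps       # 1 Gbps == 1 bit per nanosecond
    pps = gbps * 1e9 / (WIRE_BYTES * 8)    # worst-case packets per second
    print(f"{gbps:>3}G: {frame_ns:5.2f} ns per frame, {pps / 1e6:6.2f} Mpps")
```

At 100G that works out to roughly 148.8 million packets per second in the worst case, which is the rate every device in the monitoring path must keep up with.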

The line-rate requirement varies based on where infrastructure sits in the monitoring stack. Network TAPs must be capable of mirroring data at 100G line speed to packet brokers and tools. Packet brokers must handle 100G traffic on multiple ports simultaneously, processing and forwarding each packet to the tool rail at line rate. Capture devices must sustain 100G bursts in the capture-to-disk process. And any analysis layer must ingest information at 100G speeds to allow correlation, analysis and visualization.
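As a rough illustration of what "line rate" means for the capture layer, the following sketch converts 100Gbps into sustained disk bandwidth and packet rate. The 512-byte average packet size is an assumption, not a measured figure; substitute your own traffic profile:

```python
# Rough sizing for a 100G capture-to-disk path. The 512-byte average
# packet size is an assumption; adjust it to your traffic mix.
LINE_RATE_GBPS = 100
AVG_PKT_BYTES = 512

bytes_per_sec = LINE_RATE_GBPS * 1e9 / 8        # sustained write bandwidth
pkts_per_sec = bytes_per_sec / AVG_PKT_BYTES    # packets to index per second

print(f"Sustained write: {bytes_per_sec / 1e9:.1f} GB/s")
print(f"Packet rate:     {pkts_per_sec / 1e6:.1f} Mpps at {AVG_PKT_BYTES}B average")
print(f"60s full-rate burst: {bytes_per_sec * 60 / 1e12:.2f} TB of storage")
```

Even under these generous assumptions, a single saturated 100G link demands 12.5 GB/s of sustained write bandwidth and three-quarters of a terabyte of storage per minute.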

Complicating matters are various "smart" features, each of which demands additional processing resources. Packet brokers, for example, might include filtering, slicing and deduplication capabilities. If the system is already struggling to keep up with the line rate, any added processing load degrades performance further.
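To see why a feature like deduplication is expensive at line rate, consider a toy version of it. A real packet broker does this in hardware, but the shape of the per-packet work (hash, table lookup, decision) is the same; the window size and hash function here are illustrative assumptions:

```python
import hashlib

def dedupe(packets, window=1024):
    """Drop packets whose hash was seen within the last `window` packets.

    A toy model of packet-broker deduplication: the point is the extra
    per-packet work (hash, table lookup, decision) layered on top of
    plain forwarding. The window size and hash are illustrative choices.
    """
    last_seen = {}   # packet hash -> index where it was last seen
    kept = []
    for i, pkt in enumerate(packets):
        h = hashlib.sha1(pkt).digest()           # per-packet hashing cost
        prev = last_seen.get(h)
        if prev is not None and i - prev < window:
            continue                             # duplicate: suppress it
        last_seen[h] = i
        kept.append(pkt)
    return kept

# Mirrored traffic often delivers the same packet from two TAP points:
pkts = [b"\x00" * 64, b"\x01" * 64, b"\x00" * 64]
print(len(dedupe(pkts)))   # 2 -- the repeated packet is dropped
```

Multiply that per-packet work by roughly 148.8 Mpps and it is clear why smart features strain a system that is already at its line-rate ceiling.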

For any infrastructure not designed with 100G in mind, the failure mode is inevitably the same: lost or dropped packets. That, in turn, results in network blind spots. When visibility is the goal, blind spots are — at the risk of oversimplification — bad. The impact can be incorrect calculations, slower time-to-resolution or incident response, longer malware dwell time, greater application performance fluctuation, compliance or SLA challenges and more.
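The scale of a blind spot is easy to underestimate. Using the worst-case 100G packet rate from the earlier calculation, even tiny drop rates translate into a large stream of unseen packets:

```python
# How large is the blind spot? Worst-case 100G packet rate (64-byte
# frames, from the earlier calculation) is roughly 148.8 Mpps.
MAX_PPS = 148.8e6

for loss in (1e-4, 1e-3, 1e-2):   # 0.01%, 0.1%, 1% drop rates
    missed = MAX_PPS * loss
    print(f"{loss:.2%} loss -> {missed:,.0f} packets invisible every second")
```

At a seemingly negligible 0.01% drop rate, nearly 15,000 packets per second pass through the network unseen.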

Lossless monitoring requires that every part of the visibility stack is designed around 100G line speed. Packet brokers in particular, given their central role in visibility infrastructure, are a critical chokepoint. Where possible, a two-tier monitoring architecture is recommended: a high-density 10/25/100G aggregation layer to aggregate TAPs and tools, feeding a high-performance 100G core packet broker that processes and services the packets (a capacity sketch follows below). Upgrading legacy brokers is possible, but beware: upgrades add cost and may still fall short of true 100G line rate when smart features share centralized processing at the core. Newer systems with a distributed, per-port processing architecture (versus shared central processing) are designed specifically to sustain 100G line rates and eliminate these bottlenecks.
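One practical sanity check on such a design is simple arithmetic: can the aggregation tier's uplinks actually carry everything its TAP-facing ports can ingest? A minimal sketch, with a hypothetical port mix:

```python
# Oversubscription check for a two-tier design. The port mix below is a
# hypothetical example, not a reference configuration.
tap_feeds_gbps = [10] * 24 + [25] * 8 + [100] * 2   # TAP-facing ports
uplinks_gbps = [100] * 4                            # uplinks to core broker

ingress = sum(tap_feeds_gbps)
uplink = sum(uplinks_gbps)
print(f"Aggregate ingress: {ingress}G, uplink capacity: {uplink}G, "
      f"oversubscription {ingress / uplink:.1f}:1")
# Anything above 1:1 means the core can drop packets under peak load
# unless filtering at the aggregation tier trims the mirrored traffic.
```

Any ratio above 1:1 is a standing invitation for drops at peak load, which is exactly the blind-spot failure mode described above.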

The overarching point is that the desire for 100G performance cannot override the need for 100G visibility; otherwise, the entire network can suffer as a result. The visibility infrastructure needs to match the forwarding infrastructure. While 100G line rates are certainly achievable with the latest monitoring equipment and software, IT teams must not assume that existing network visibility systems can keep up with the new load.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks