100G is Increasingly Popular, and It's Creating a Host of Management Challenges
November 02, 2020

Nadeem Zahid
cPacket Networks

Name virtually any technology trend — digital transformation, cloud-first operations, datacenter consolidation, mobility, streaming data, AI/ML, the application explosion — and they all have one thing in common: an insatiable need for higher bandwidth (and, often, lower latency). The result is a steady push of 10Gbps and 25Gbps network infrastructure toward the edge, and increasing adoption of 100Gbps in enterprise core, datacenter and service provider networks.

Initial deployments focused on backbone interconnects (historically a dual-ring failover topology; more recently mesh connectivity), driven primarily by north-south traffic. Datacenter adoption has followed, generally in a spine-leaf architecture to handle the growth in east-west traffic.

Beyond a hunger for bandwidth, 100G is having a moment for several reasons: a commodity-driven drop in cost, increasing availability of 100G-enabled components, and the ability to easily break a 100G link out into 10/25G line rates. In light of these trends, analyst firm Dell'Oro expects 100G adoption to hit its stride this year and remain strong over the next five years.

Nobody in their right mind disputes the notion that enterprises and service providers will continue to adopt ever-faster networks. However, the same thing that makes 100G desirable — speed — conspires to create a host of challenges when trying to manage and monitor the infrastructure. The simple truth is that the faster the network, the more quickly things can go wrong. That makes monitoring for things like regulatory compliance, load balancing, incident response/forensics, capacity planning, etc., more important than ever.

At 10G, a minimum-size packet is transmitted in roughly 67 nanoseconds; at 100G that window shrinks tenfold, with packets flying by every 6.7 nanoseconds. And therein lies the problem: when it comes to 100G, traditional management and monitoring infrastructure simply can't keep up.
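
Those figures follow directly from the arithmetic for a minimum-size Ethernet frame: 64 bytes of frame plus the 8-byte preamble and 12-byte inter-frame gap, or 84 bytes on the wire. The short Python sketch below reproduces the numbers; the minimum-frame assumption is the standard worst case for line-rate math, not a figure specific to any vendor's gear.

```python
# Back-of-the-envelope serialization time for a minimum-size Ethernet frame.
WIRE_BYTES = 64 + 8 + 12  # minimum frame + preamble + inter-frame gap = 84 bytes

def packet_time_ns(line_rate_gbps: float, wire_bytes: int = WIRE_BYTES) -> float:
    """Time on the wire for one minimum-size packet, in nanoseconds."""
    bits = wire_bytes * 8
    return bits / line_rate_gbps  # a Gbps rate is equivalent to bits per nanosecond

for rate in (10, 25, 100):
    t = packet_time_ns(rate)
    print(f"{rate:>3} Gbps: {t:5.2f} ns per packet ({1e3 / t:.1f} Mpps)")
```

At 100G that works out to roughly 148 million packets per second on a single port, and every inline monitoring function has to live within that budget.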

The line-rate requirement varies based on where a device sits in the monitoring stack. Network TAPs must be capable of mirroring data at 100G line speed to packet brokers and tools. Packet brokers must handle 100G traffic on multiple ports simultaneously, processing and forwarding each packet at line rate to the tool rail. Capture devices need to sustain 100G bursts through the capture-to-disk process. And any analysis layer must ingest information at 100G speeds to allow correlation, analysis and visualization.
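
To make the capture-to-disk requirement concrete, the sketch below budgets the memory buffer needed to ride out a full-rate 100G burst. The burst duration and sustained disk bandwidth are hypothetical placeholder values chosen for illustration, not figures from the article or from any product.

```python
# Illustrative capture-to-disk budget for a 100G burst (assumed numbers, not specs).
LINE_RATE_GBPS = 100     # capture port line rate
BURST_SECONDS = 2.0      # assumed duration of a full-line-rate burst
DISK_WRITE_GBPS = 60     # assumed sustained write bandwidth of the storage array

ingest_gb_per_s = LINE_RATE_GBPS / 8   # 12.5 GB/s arriving at 100G
disk_gb_per_s = DISK_WRITE_GBPS / 8    # 7.5 GB/s sustained to disk
shortfall = max(0.0, ingest_gb_per_s - disk_gb_per_s)

# Whatever the disks cannot absorb during the burst must sit in memory buffers;
# once the buffer fills, packets drop and the capture is no longer lossless.
buffer_needed_gb = shortfall * BURST_SECONDS

print(f"Ingest rate:      {ingest_gb_per_s:.1f} GB/s")
print(f"Disk write rate:  {disk_gb_per_s:.1f} GB/s")
print(f"Buffer required:  {buffer_needed_gb:.1f} GB for a {BURST_SECONDS:.0f} s burst")
```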

Complicating matters are various "smart" features, each of which demands additional processing resources. Packet brokers, for example, might include filtering, slicing and deduplication capabilities. If the system is already struggling to keep up with the line rate, any added processing load degrades performance further.
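
As a toy illustration of why such features are costly, the following sketch implements deduplication the naive way: hash every packet and check a rolling window of recent hashes. Real packet brokers do this in dedicated hardware rather than software, but the per-packet work is the same in principle, and at 100G it all has to fit inside the roughly 6.7-nanosecond per-packet budget.

```python
# Naive packet deduplication: hash each packet, drop repeats seen in a rolling window.
# Purely illustrative; production packet brokers implement this in hardware.
import hashlib
from collections import OrderedDict

class Deduplicator:
    """Flag packets whose contents were already seen within a rolling window."""

    def __init__(self, window: int = 1024):
        self.window = window
        self.seen = OrderedDict()   # digest -> None, ordered oldest-first

    def is_duplicate(self, packet: bytes) -> bool:
        digest = hashlib.sha1(packet).digest()   # hash of the full packet contents
        if digest in self.seen:
            return True
        self.seen[digest] = None
        if len(self.seen) > self.window:         # evict the oldest entry
            self.seen.popitem(last=False)
        return False

dedup = Deduplicator()
packets = [b"flow-a-seq-1", b"flow-a-seq-1", b"flow-b-seq-1"]
print([dedup.is_duplicate(p) for p in packets])   # [False, True, False]
```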

For any infrastructure not designed with 100G in mind, the failure mode is inevitably the same: lost or dropped packets. That, in turn, results in network blind spots. When visibility is the goal, blind spots are — at the risk of oversimplification — bad. The impact can be incorrect calculations, slower time-to-resolution or incident response, longer malware dwell time, greater application performance fluctuation, compliance or SLA challenges and more.

Lossless monitoring requires that every part of the visibility stack be designed around 100G line speeds. Packet brokers in particular, given their central role in the visibility infrastructure, are a critical chokepoint. Where possible, a two-tier monitoring architecture is recommended: a high-density 10/25/100G aggregation layer to aggregate TAPs and tools, and a high-performance 100G core packet broker to process and service the packets. Upgrades to existing gear are possible, but beware: they add cost and may still fall short of true 100G line rate when smart features share centralized processing at the core. Newer systems with a distributed, dedicated per-port processing architecture (versus shared central processing) are designed specifically to sustain 100G line rates and eliminate these bottlenecks, as the simplified model below illustrates.
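
The difference between shared and per-port processing shows up clearly in a simplified throughput model: with one shared central engine, achievable per-port throughput collapses as monitoring ports are added, while dedicated per-port engines hold line rate. The port counts and engine capacities below are illustrative assumptions, not vendor specifications.

```python
# Simplified throughput model: shared central processing vs. per-port processing.
PORTS = 8                    # 100G monitoring ports in use
PORT_RATE_GBPS = 100
CENTRAL_ENGINE_GBPS = 400    # assumed capacity of a shared central engine
PER_PORT_ENGINE_GBPS = 100   # assumed capacity of each dedicated per-port engine

offered_load = PORTS * PORT_RATE_GBPS

# Shared architecture: every port contends for one engine, so per-port throughput
# collapses once the aggregate load exceeds the engine's capacity.
shared_per_port = min(PORT_RATE_GBPS, CENTRAL_ENGINE_GBPS / PORTS)

# Distributed architecture: each port has its own engine, so line rate holds
# regardless of how many ports are active.
dedicated_per_port = min(PORT_RATE_GBPS, PER_PORT_ENGINE_GBPS)

print(f"Offered load:             {offered_load} Gbps across {PORTS} ports")
print(f"Shared engine, per port:  {shared_per_port:.0f} Gbps (excess traffic dropped)")
print(f"Per-port engines:         {dedicated_per_port:.0f} Gbps (full line rate)")
```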

The overarching point is that the desire for 100G performance cannot be allowed to outrun the need for 100G visibility, or the entire network can suffer as a result. The visibility infrastructure needs to match the forwarding infrastructure. While 100G line rates are certainly achievable with the latest monitoring equipment and software, IT teams must not assume that their existing network visibility systems can keep up with the new load.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks