100G is Increasingly Popular, and It's Creating a Host of Management Challenges
November 02, 2020

Nadeem Zahid
cPacket Networks

Name virtually any technology trend — digital transformation, cloud-first operations, datacenter consolidation, mobility, streaming data, AI/ML, the application explosion, etc. — they all have one thing in common: an insatiable need for higher bandwidth (and often, low latency). The result is a steady push to move 10Gbps and 25Gbps network infrastructure toward the edge, and increasing adoption of 100Gbps in enterprise core, datacenter and service provider networks.

Initial deployments focused on backbone interconnects (historically a dual-ring failover topology; more recently mesh connectivity), primarily driven by north-south traffic. Data center adoption has followed, generally in spine-leaf architecture to handle increases in east-west connections.

Beyond a hunger for bandwidth, 100G is having a moment for several reasons: a commodity-derived drop in cost, increasing availability of 100G-enabled components, and the derivative ability to easily break 100G into 10/25G line rates. In light of these trends, analyst firm Dell'Oro expects 100G adoption to hit its stride this year and remain strong over the next five years.

Nobody in their right mind disputes the notion that enterprises and service providers will continue to adopt ever-faster networks. However, the same thing that makes 100G desirable — speed — conspires to create a host of challenges when trying to manage and monitor the infrastructure. The simple truth is that the faster the network, the more quickly things can go wrong. That makes monitoring for things like regulatory compliance, load balancing, incident response/forensics, capacity planning, etc., more important than ever.

At 10G, a minimum-size packet occupies the wire for roughly 67 nanoseconds; at 100G the packet rate increases tenfold, with each packet flying by in just 6.7 nanoseconds. And therein lies the problem: when it comes to 100G, traditional management and monitoring infrastructure can't keep up.
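The arithmetic behind those figures can be sketched in a few lines. This assumes minimum-size 64-byte Ethernet frames plus the standard 20 bytes of on-wire overhead (preamble, start delimiter and inter-frame gap); it is a back-of-the-envelope illustration, not vendor data.

```python
# Wire time per minimum-size Ethernet frame.
# 64-byte frame + 7B preamble + 1B start delimiter + 12B inter-frame gap.
FRAME_BYTES = 64
OVERHEAD_BYTES = 20
BITS_PER_SLOT = (FRAME_BYTES + OVERHEAD_BYTES) * 8  # 672 bits

def frame_time_ns(link_gbps: float) -> float:
    """Time one minimum-size frame spends on the wire, in nanoseconds."""
    return BITS_PER_SLOT / link_gbps  # bits / (Gbit/s) gives nanoseconds

print(f"10G:  {frame_time_ns(10):.1f} ns per frame")   # 67.2 ns
print(f"100G: {frame_time_ns(100):.2f} ns per frame")  # 6.72 ns
print(f"100G worst case: {1e9 / frame_time_ns(100) / 1e6:.1f} Mpps")
```

At worst case, that works out to roughly 148.8 million packets per second that every device in the monitoring path must be able to absorb.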

The line-rate requirement varies based on where infrastructure sits in the monitoring stack. Network TAPs must mirror data at 100G line speed to packet brokers and tools. Packet brokers must handle 100G traffic on multiple ports simultaneously, processing and forwarding each packet at line rate to the tool rail. Capture devices must sustain 100G bursts in the capture-to-disk process. And any analysis layer must ingest information at 100G speeds to allow correlation, analysis and visualization.
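A quick way to see why the packet-broker tier is so demanding is to compare aggregate ingress against the capacity of the tool rail. The port counts below are hypothetical, purely to illustrate the check:

```python
# Illustrative oversubscription check for a packet broker.
# Port counts and capacities are hypothetical, not from the article.
ingress_ports_gbps = [100, 100, 100, 100]  # four 100G TAP/SPAN feeds
tool_rail_gbps = 2 * 100                   # two 100G tool ports

ingress_total = sum(ingress_ports_gbps)
if ingress_total > tool_rail_gbps:
    ratio = ingress_total / tool_rail_gbps
    print(f"{ratio:.1f}:1 oversubscribed; must filter or load-balance, "
          "or the broker drops packets")
```

Any time the ratio exceeds 1:1, something has to give: either traffic is filtered and balanced deliberately, or packets are dropped indiscriminately.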

Complicating matters are various "smart" features, each of which demands additional processing resources. Packet brokers, for example, might include filtering, slicing and deduplication capabilities. If a system is already struggling to keep up with the line rate, any added processing load degrades performance further.
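To make the per-packet cost of such features concrete, here is a minimal sketch of deduplication, one of the examples above. Real brokers implement this in hardware at line rate; the hash choice and window size here are illustrative assumptions:

```python
import hashlib

# Sketch of packet deduplication: suppress a packet if an identical one
# was seen within a short time window. Window size is an assumption.
WINDOW_NS = 50_000  # ~50 microseconds

_seen: dict[bytes, int] = {}  # digest -> timestamp (ns) of last sighting

def is_duplicate(packet: bytes, now_ns: int) -> bool:
    """Return True if an identical packet arrived inside the window."""
    digest = hashlib.blake2b(packet, digest_size=8).digest()
    last = _seen.get(digest)
    _seen[digest] = now_ns
    return last is not None and now_ns - last <= WINDOW_NS

pkt = b"\x45\x00" + b"payload"
print(is_duplicate(pkt, 1_000))    # False: first sighting
print(is_duplicate(pkt, 21_000))   # True: same bytes, inside window
print(is_duplicate(pkt, 200_000))  # False: window expired
```

Even this trivial version hashes every packet and touches shared state; at 6.7 ns per packet, that work has to happen in dedicated silicon, which is why smart features strain systems already near their line-rate ceiling.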

For any infrastructure not designed with 100G in mind, the failure mode is inevitably the same: lost or dropped packets. That, in turn, results in network blind spots. When visibility is the goal, blind spots are — at the risk of oversimplification — bad. The impact can be incorrect calculations, slower time-to-resolution or incident response, longer malware dwell time, greater application performance fluctuation, compliance or SLA challenges and more.

Lossless monitoring requires that every part of the visibility stack is designed around 100G line speeds. Packet brokers in particular, given their central role in visibility infrastructure, are a critical potential chokepoint. Where possible, a two-tier monitoring architecture is recommended: a high-density 10/25/100G aggregation layer to aggregate TAPs and tools, and a high-performance 100G core packet broker to process and service the packets. Upgrades to existing equipment are possible, but beware: they add cost and may still fall short of true 100G line speed when smart features share centralized processing at the core. Newer systems with a distributed, dedicated per-port processing architecture (versus shared central processing) are designed specifically for 100G line rates and eliminate these bottlenecks.
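Sizing the two-tier design comes down to the same line-rate arithmetic: the aggregation layer's uplinks to the core broker must cover the worst-case sum of the TAP feeds. The feed mix and uplink count below are hypothetical; real planning needs measured traffic peaks:

```python
# Rough sizing sketch for a two-tier monitoring architecture.
# Feed mix and uplink counts are hypothetical examples.
tap_feeds_gbps = [10] * 12 + [25] * 4  # twelve 10G TAPs plus four 25G TAPs
core_uplinks_gbps = 2 * 100            # aggregation tier to 100G core broker

peak = sum(tap_feeds_gbps)             # Gbps if every feed bursts at once
headroom = core_uplinks_gbps / peak
print(f"Peak aggregate {peak} Gbps over {core_uplinks_gbps} Gbps of uplink")
print(f"Lossless only while utilization stays below {headroom:.0%} of peak")
```

If sustained utilization climbs past that threshold, the options are more uplink capacity or deliberate filtering at the aggregation layer; otherwise the blind spots described earlier reappear.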

The overarching point is that desire for 100G performance cannot override the need for 100G visibility, or the entire network can suffer as a result. The visibility infrastructure needs to match the forwarding infrastructure. While 100G line rates are certainly possible with the latest monitoring equipment and software, IT teams must not assume that existing network visibility systems can keep up with the new load.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks