100G is Increasingly Popular, and It's Creating a Host of Management Challenges

Nadeem Zahid
cPacket Networks

Name virtually any technology trend — digital transformation, cloud-first operations, datacenter consolidation, mobility, streaming data, AI/ML, the application explosion — and they all have one thing in common: an insatiable need for higher bandwidth (and often, low latency). The result is a steady push of 10Gbps and 25Gbps network infrastructure toward the edge, and increasing adoption of 100Gbps in enterprise core, datacenter and service provider networks.

Initial deployments focused on backbone interconnects (historically a dual-ring failover topology; more recently mesh connectivity), primarily driven by north-south traffic. Data center adoption has followed, generally in a spine-leaf architecture to handle the growth in east-west connections.

Beyond the hunger for bandwidth, 100G is having a moment for several reasons: a commodity-driven drop in cost, increasing availability of 100G-enabled components, and the ability to easily break a 100G link out into 10G or 25G line rates. In light of these trends, analyst firm Dell'Oro expects 100G adoption to hit its stride this year and remain strong over the next five years.

Nobody in their right mind disputes the notion that enterprises and service providers will continue to adopt ever-faster networks. However, the same thing that makes 100G desirable — speed — conspires to create a host of challenges when trying to manage and monitor the infrastructure. The simple truth is that the faster the network, the more quickly things can go wrong. That makes monitoring for things like regulatory compliance, load balancing, incident response/forensics, capacity planning, etc., more important than ever.

At 10G, a minimum-size packet is transmitted in about 67 nanoseconds; at 100G that window shrinks tenfold, with packets flying by every 6.7 nanoseconds. And therein lies the problem: when it comes to 100G, traditional management and monitoring infrastructure can't keep up.
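To make the arithmetic concrete, here is a quick back-of-the-envelope calculation (a sketch in Python, assuming minimum-size 64-byte Ethernet frames plus preamble and interframe gap) showing the per-packet time budget and worst-case packet rate at 10G versus 100G:

```python
# Serialization time for minimum-size Ethernet frames.
# A 64-byte frame occupies 84 bytes on the wire:
# 7B preamble + 1B start-of-frame delimiter + 64B frame + 12B interframe gap.
WIRE_BITS_PER_MIN_FRAME = (7 + 1 + 64 + 12) * 8  # 672 bits per packet

for rate_gbps in (10, 100):
    rate_bps = rate_gbps * 1e9
    time_ns = WIRE_BITS_PER_MIN_FRAME / rate_bps * 1e9
    pps = rate_bps / WIRE_BITS_PER_MIN_FRAME
    print(f"{rate_gbps}G: {time_ns:.1f} ns per packet, ~{pps / 1e6:.1f} Mpps")

# Output:
# 10G: 67.2 ns per packet, ~14.9 Mpps
# 100G: 6.7 ns per packet, ~148.8 Mpps
```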

The line-rate requirement varies based on where infrastructure sits in the monitoring stack. Network TAPs must be capable of mirroring data at 100G line speeds to packet brokers and tools. Packet brokers must handle that 100G traffic simultaneously on multiple ports, and process and forward each packet at line rate to the tool rail. Capture devices need to be able to sustain 100G bursts in the capture-to-disk process. And any analysis layer must ingest information at 100G speeds to allow correlation, analysis and visualization.
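As a rough illustration of what capture-to-disk at 100G implies for storage throughput and volume, the sketch below estimates the data written during a capture burst. The utilization and burst-length figures are purely hypothetical, chosen only to show the scale involved:

```python
# Rough sizing for capture-to-disk at 100G (illustrative numbers only).
LINE_RATE_GBPS = 100
BYTES_PER_SEC = LINE_RATE_GBPS * 1e9 / 8  # 12.5 GB/s at full line rate

def capture_volume_gb(burst_seconds: float, utilization: float = 1.0) -> float:
    """Data written to disk for a burst of the given length and link utilization."""
    return BYTES_PER_SEC * utilization * burst_seconds / 1e9

print(capture_volume_gb(1))        # 12.5 GB for a one-second full-rate burst
print(capture_volume_gb(60, 0.4))  # ~300 GB for one minute at 40% utilization
```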

Complicating matters are various "smart" features, each of which demands additional processing resources. Packet brokers, for example, might include filtering, slicing and deduplication capabilities. If the system is already struggling with the line rate, any additional processing load degrades performance further.
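To show why such features are not free, here is a toy, purely illustrative sketch of deduplication plus slicing applied per packet. Real packet brokers implement these functions in dedicated hardware, and the SHA-1 hash, unbounded dedup table and 128-byte slice length below are arbitrary assumptions for illustration only:

```python
# Toy illustration of "smart" packet-broker features and their per-packet cost.
# Each feature adds work on every packet, which compounds at ~148 Mpps.
import hashlib

SLICE_LEN = 128       # keep only the first 128 bytes of each packet
seen_digests = set()  # a real system would use a bounded, rolling window

def process(packet: bytes) -> bytes | None:
    digest = hashlib.sha1(packet).digest()  # dedup: hash the full packet
    if digest in seen_digests:
        return None                          # drop the duplicate
    seen_digests.add(digest)
    return packet[:SLICE_LEN]                # slice before forwarding to tools
```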

For any infrastructure not designed with 100G in mind, the failure mode is inevitably the same: lost or dropped packets. That, in turn, results in network blind spots. When visibility is the goal, blind spots are — at the risk of oversimplification — bad. The impact can include incorrect calculations, slower time-to-resolution or incident response, longer malware dwell time, greater application performance fluctuation, compliance or SLA challenges and more.
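Some quick arithmetic frames why even small drop rates matter. Assuming the worst-case rate of roughly 148.8 million packets per second for minimum-size frames at 100G (from the earlier calculation), a seemingly tiny loss rate still means millions of unseen packets per minute:

```python
# Even "small" loss rates at 100G translate into large absolute packet counts.
MAX_PPS_100G = 100e9 / 672  # ~148.8 million minimum-size packets per second

for loss_rate in (0.001, 0.0001):  # 0.1% and 0.01% loss
    dropped_per_min = MAX_PPS_100G * loss_rate * 60
    print(f"{loss_rate:.2%} loss -> ~{dropped_per_min / 1e6:.1f}M packets unseen per minute")

# Output:
# 0.10% loss -> ~8.9M packets unseen per minute
# 0.01% loss -> ~0.9M packets unseen per minute
```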

Lossless monitoring requires that every part of the visibility stack is designed around 100G line speeds. Packet brokers in particular, given their central role in visibility infrastructure, are a critical chokepoint. Where possible, a two-tier monitoring architecture is recommended: a high-density 10/25/100G aggregation layer to aggregate TAPs and tools, and a high-performance 100G core packet broker to process and service the packets. Upgrades to existing gear are possible, but beware: they add cost and may still fall short of true 100G line speeds when smart features centralize and share processing at the core. Newer systems with a distributed, dedicated per-port processing architecture (versus shared central processing) are designed specifically to accommodate 100G line rates and eliminate these bottlenecks.
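As a simplified way to see where the bottleneck appears, the following sketch contrasts a shared central processing engine with per-port engines. The port counts and engine capacities are hypothetical numbers for illustration, not measurements of any particular product:

```python
# Illustrative capacity check: shared central processing vs per-port processing.

def central_ok(num_ports: int, port_gbps: int, engine_capacity_gbps: int) -> bool:
    """A single shared engine must absorb the sum of all monitored ports."""
    return num_ports * port_gbps <= engine_capacity_gbps

def per_port_ok(port_gbps: int, per_port_engine_gbps: int) -> bool:
    """Each port has its own engine, so capacity scales with port count."""
    return port_gbps <= per_port_engine_gbps

print(central_ok(num_ports=8, port_gbps=100, engine_capacity_gbps=400))  # False: 800G offered, 400G of processing
print(per_port_ok(port_gbps=100, per_port_engine_gbps=100))              # True: capacity grows with every port added
```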

The overarching point is that the desire for 100G performance cannot override the need for 100G visibility, or the entire network can suffer as a result. The visibility infrastructure needs to match the forwarding infrastructure. While 100G line rates are certainly achievable with the latest monitoring equipment and software, IT teams must not assume that existing network visibility systems can keep up with the new load.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks
