3 Critical Network Tasks Made Easier by Scale-Out Packet Broker Technology

Alastair Hartrup

When it comes to network visibility, there is much discussion about packet broker technology and the various features these solutions provide to network architects and IT managers. Packet brokers allow organizations to aggregate the data required for a variety of monitoring solutions, including network performance monitoring and diagnostics (NPMD) platforms and unified threat management (UTM) appliances. But when it comes to ensuring these solutions provide the insights required by NetOps and security teams, IT can spend an exorbitant amount of time dealing with issues around adds, moves and changes. This can have a dramatic impact on budgets and tool availability.
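The core function described above — aggregating traffic from multiple taps and filtering it out to the tools that need it — can be sketched in a few lines. This is an illustrative model only; the packet fields, rule format and tool names are simplified assumptions, not a real broker API.

```python
# Illustrative packet-broker model: aggregate packets from several tap/SPAN
# inputs, then forward each packet to every monitoring tool whose filter
# rule it matches. Fields and tool names are hypothetical.

def broker(packets, rules):
    """Fan aggregated packets out to tools whose filter matches."""
    delivered = {tool: [] for tool, _ in rules}
    for pkt in packets:
        for tool, match in rules:
            if match(pkt):
                delivered[tool].append(pkt)
    return delivered

# Aggregated traffic from three taps (simplified packet metadata).
taps = [
    {"src": "10.0.0.5", "dport": 443, "proto": "tcp"},
    {"src": "10.0.0.9", "dport": 53,  "proto": "udp"},
    {"src": "10.0.0.5", "dport": 80,  "proto": "tcp"},
]

# Each tool sees only the slice of traffic it cares about.
rules = [
    ("npmd_platform", lambda p: p["proto"] == "tcp"),      # all TCP to NPMD
    ("utm_appliance", lambda p: p["dport"] in (53, 443)),  # DNS/TLS to UTM
]

out = broker(taps, rules)
print(len(out["npmd_platform"]), len(out["utm_appliance"]))  # 2 2
```

The point of the sketch is the fan-out: one aggregated feed, many per-tool filtered views, which is why rule changes (adds, moves, changes) land on the broker rather than on every tool.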
 
Why does this happen?

At a fundamental level, most of today's packet broker technology is designed to scale up, not out. As I've mentioned in a previous article series, Why "Scaling Up" Your Network Infrastructure Always Leads to More Complexity and Cost, simply scaling up your network infrastructure at every growth point is a more complex and more expensive endeavor over time — cutting into business profitability and productivity. Instead, building network architectures that can scale out — quickly adding ports, updating features, and changing speeds or capabilities — is often a better approach.
 
Since most network managers spend a significant amount of time, day in and day out, dealing with adds, moves and changes, let's look at the impact the scale-out approach can have.

Adds

The world of network adds has gotten more complex over the last several years with the rise of IoT and advances in network monitoring and security. The ability to properly capture packet information crossing the network and filter that data to management platforms, tools and appliances is a core function of every packet broker. But with networks continuing to grow at a breakneck pace and monitoring solutions requiring more and more data and bandwidth, keeping up with adds can be a real challenge.

Because teams often have trouble predicting growth, IT traditionally errs on the side of caution and defaults to deploying larger-than-required systems with a high level of port availability. Vendors have recognized this fear and sell monolithic solutions that require large capital investments. While this scale-up approach allows IT to stay in the same vendor product family with similar operational characteristics, managing adds in this manner is often wasteful and expensive. Letting infrastructure sit idle rarely makes business sense. Therefore, deploying technology that allows a team to scale out and add extension units incrementally can help meet immediate needs and support long-term growth.
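The cost difference between the two approaches is easy to see with a back-of-the-envelope model. The sketch below compares buying one oversized chassis up front against starting with a base unit and adding extension units only when port demand exceeds installed capacity. All prices, port counts and the growth forecast are illustrative assumptions, not vendor figures.

```python
# Hypothetical spend comparison: "scale up" (one large chassis bought
# day one) vs. "scale out" (base unit plus incremental extension units).
# Every number here is an assumption for illustration only.

def scale_up_cost(chassis_price=120_000):
    # Pay for the full oversized chassis immediately, used or not.
    return chassis_price

def scale_out_cost(ports_needed_by_year, base_price=30_000,
                   unit_price=15_000, base_ports=32, unit_ports=16):
    # Start with a base unit; add an extension unit only when demand
    # exceeds currently installed port capacity.
    capacity, spend = base_ports, base_price
    for needed in ports_needed_by_year:
        while capacity < needed:
            capacity += unit_ports
            spend += unit_price
    return spend

forecast = [24, 40, 52, 60]            # ports needed in years 1-4
print(scale_up_cost())                 # 120000 spent immediately
print(scale_out_cost(forecast))        # 60000, paid as growth arrives
```

Beyond the lower total, the scale-out spend is deferred: capital is committed only when the growth actually materializes, which matters when forecasts are uncertain.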

Moves

One of the top network trends for 2019 is a shift towards edge computing. Monolithic data centers are moving to hybrid clouds with more computing being done at the edge. As a result, managing moves today involves much more than changing the rack location of a server. There are more physical network locations to manage as more data and applications move out of the core network to distributed sites. Network managers need flexibility when connecting monitoring and security devices to remote locations (to be closer to where access and applications are moving).

To deal with this shift, IT teams often buy multiple smaller systems and change them out as each remote location grows beyond its initial capacity. Many infrastructure vendors offer a path from small-business to enterprise to carrier-class packet brokers for end-to-end visibility. The problem is that small, fixed-configuration systems lack the flexibility to adapt to the new paradigm of constantly shifting computing resources and requirements. Again, organizations fall back to the "buy bigger than needed" option, which becomes even more wasteful when applied to multiple locations.

To overcome this, there's an opportunity to shift toward the scale-out approach and provide only the ports needed for the size and scale of each location. Growth, when needed, can then be accommodated with modular port additions through extension units. Further, port flexibility allows simple re-deployment covering the variety of speeds and media found in diverse remote locations. This approach provides only what is needed at each site, while retaining the flexibility to adjust for future growth. Managing moves within the data center or out at the edge is less disruptive and more budget-friendly with a scale-out architecture.

Changes

There is one thing that network managers can count on – the network will change. Planning for and managing change is a fundamental aspect of a network manager's job. Managing network changes also requires managing visibility changes. Monitoring, security and performance management tools require a level of visibility that can keep pace with network changes, and packet brokers can provide the bridge to connect and manage them through the sea of constant change.

However, when changes in speed, media or features are required, IT teams have the option to buy new boxes with upgraded bandwidth capabilities or purchase additional feature licenses to manage the network complexities that arise from change. These options provide cash flow for vendors, but make network changes complicated and expensive. Many annual software and feature license fees continue as long as the system is in service. IT can overcome these issues by simply re-purposing existing flex ports when speed or media changes. That means utilizing packet broker solutions that offer flexible-speed ports (such as 1/10/25 Gbps or 40/100 Gbps).
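The flex-port idea above amounts to changing a port's operating rate in software, within the set of rates its hardware group supports, instead of replacing the box. A minimal sketch, assuming hypothetical port names and a made-up class (this is not a real vendor interface):

```python
# Minimal flex-port model: a port can be re-purposed among the rates its
# hardware group supports. Groups mirror the flexible-speed ranges above;
# the class, port names and API are illustrative assumptions.

SPEED_GROUPS = {
    "SFP28":  {1, 10, 25},   # 1/10/25 Gbps flex ports
    "QSFP28": {40, 100},     # 40/100 Gbps flex ports
}

class FlexPort:
    def __init__(self, name, group, speed_gbps):
        self.name, self.group = name, group
        self.set_speed(speed_gbps)

    def set_speed(self, speed_gbps):
        # Re-purpose the port only within its hardware-supported rates;
        # anything else would genuinely require new hardware.
        if speed_gbps not in SPEED_GROUPS[self.group]:
            raise ValueError(
                f"{self.name}: {speed_gbps}G not supported by {self.group}")
        self.speed_gbps = speed_gbps

port = FlexPort("eth1", "SFP28", 10)
port.set_speed(25)          # a speed change handled by reconfiguration
print(port.speed_gbps)      # 25
```

The design point is the guard in `set_speed`: reconfiguration is free within a group, so buying flexible-speed ports up front is what keeps later changes from turning into forklift upgrades.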

The networking evolution is being driven by new technology, business growth, security concerns, budget requirements and new trends in overall IT architectures. As these trends play out, the ability to efficiently manage network adds, moves and changes will be critical to increasing availability, enhancing services, protecting infrastructure and maintaining budget discipline. Understanding the role of the packet broker, and the critical differences between scale-up architectures and scale-out technology, can be key to ensuring your NetOps team isn't wasting time and money.
