NetScout Introduces nGenius 1500 Series Packet Flow Switch

NetScout Systems announced the availability of the nGenius 1500 series packet flow switch, a high-performance, ultra-low-latency network monitoring switch that enables enterprise IT organizations and service providers to cost-effectively aggregate, filter and distribute network traffic to the nGenius Service Assurance Solution and other monitoring, compliance and security tools.

As the first announced product resulting from NetScout’s acquisition of Simena, the new nGenius 1500 series packet flow switch further extends NetScout’s pervasive visibility capabilities and strengthens its Unified Service Delivery Management strategy by aggregating network traffic and distributing the right packet-flow data to the right monitoring tools at the right time.

Further, NetScout’s entrance into the network monitoring switch market significantly changes the market landscape and empowers IT organizations to simplify and consolidate their network monitoring architecture with a single-vendor solution, thus reducing vendor complexity and leading to a lower total cost of ownership.

IT organizations are increasingly leveraging valuable packet-flow data to support a wide range of management, compliance and security monitoring activities. The fundamental role of a network monitoring switch is to share access to critical network links and efficiently distribute packet-flow data across a diverse range of tools and devices, such as performance and service management, security (IPS, DLP) and other analysis, compliance and monitoring tools, rather than requiring a separate network connection for each device.

In addition, since these various monitoring and security tools typically have different requirements for the type of network traffic they receive, leveraging an intelligent network monitoring switch that can filter and condition traffic will streamline delivery and reduce the processing burden on the receiving device, thus lowering the cost of accessing the data.
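The per-tool filtering described above can be pictured as a small rule table consulted for every packet. The sketch below is purely illustrative, not NetScout's implementation; the rule fields, tool port names and packet representation are all assumptions made for the example.

```python
# Conceptual sketch (not NetScout's implementation): a monitoring switch
# applies per-tool filter rules so each attached device receives only the
# traffic types it needs. Field names and tool ports are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FilterRule:
    """Match criteria for one monitoring tool's feed."""
    tool_port: str                      # egress port the tool is attached to
    protocol: Optional[str] = None      # e.g. "udp"; None matches any protocol
    dst_port: Optional[int] = None      # e.g. 5060 for a SIP/VoIP analyzer

    def matches(self, pkt: dict) -> bool:
        if self.protocol is not None and pkt["protocol"] != self.protocol:
            return False
        if self.dst_port is not None and pkt["dst_port"] != self.dst_port:
            return False
        return True

def distribute(pkt: dict, rules: list) -> list:
    """Return the tool ports that should receive a copy of this packet."""
    return [r.tool_port for r in rules if r.matches(pkt)]

rules = [
    FilterRule("ips-port"),                                  # IPS sees all traffic
    FilterRule("voip-port", protocol="udp", dst_port=5060),  # SIP signaling only
]

pkt = {"protocol": "udp", "dst_port": 5060}
print(distribute(pkt, rules))  # ['ips-port', 'voip-port']
```

Because the match runs once in the switch rather than in every tool, each receiving device processes only the subset it cares about, which is the cost reduction the paragraph describes.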

“Since it was established in 2003, the traffic monitoring switching market has been standalone, requiring customers to use alternate vendors for tooling and traffic visibility. These technologies are critical for the increasing need for packet data visibility to enable better utilization and a longer life for the monitoring and security tools that are connected to them, due to their ability to deliver a subset of the total traffic in an intelligent manner,” said Jonah Kowall, Research Director of IT Operations, Gartner.

“With the new pressures to support increasing Ethernet capacities and traffic flows, Gartner expects traffic monitoring switches to be standard for many new 10 Gigabit Ethernet network architectures and we believe that it makes sense for performance management vendors to add this to their solution offerings.”

Built from the same low-latency switching silicon used in several high-performance data center networking platforms, the nGenius 1500 series packet flow switch combines the flexibility, power, scale and intelligence needed to support the most demanding network monitoring environments.

The single rack unit (1 RU) device provides up to 24 ports of 10 Gigabit Ethernet (GbE) with the industry's lowest sustained latency of 650 nanoseconds. Each of the 24 ports can also accommodate a 1 GbE connection through the use of small form-factor pluggable (SFP) connectors, and can be dynamically configured for either incoming or outgoing traffic flows. A powerful 240 Gbps non-blocking switch fabric delivers Layer 2-4 wire-speed intelligent packet filtering, port tagging and load balancing capabilities; all service options can be enabled simultaneously on each port with no performance impact.
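Load balancing in this class of device is typically flow-aware: hashing a flow's 5-tuple keeps every packet of a conversation on the same egress port, so the receiving tool always sees complete sessions. The following is a minimal sketch of that idea under assumed inputs, not a description of the nGenius 1500's internal hashing.

```python
# Illustrative flow-hash load balancing: map a packet's 5-tuple to one of
# n_ports tool ports. A stable hash guarantees all packets of a flow reach
# the same port, preserving whole conversations for the attached tool.
import hashlib

def pick_tool_port(src_ip, dst_ip, src_port, dst_port, proto, n_ports):
    """Map a flow's 5-tuple deterministically to an egress port index."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_ports

# Two packets of the same TCP flow always land on the same port:
a = pick_tool_port("10.0.0.1", "10.0.0.2", 40000, 443, "tcp", 4)
b = pick_tool_port("10.0.0.1", "10.0.0.2", 40000, 443, "tcp", 4)
assert a == b and 0 <= a < 4
```

Real switching silicon computes an equivalent hash in hardware at wire speed, which is why the fabric can load-balance with no performance impact.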

The nGenius 1500 series packet flow switch provides powerful aggregation and intelligent filtering capabilities to collect IP packet-flow traffic from a single network connection, or monitoring point, and distribute the flows to a diverse range of management and security tools, including the nGenius InfiniStream appliance and the nGenius Voice|Video Engine appliance.

Monitored traffic can be optimized and conditioned to deliver only the relevant data from a particular packet-flow or user session to improve end-device performance. This enables the IT organization to simplify and improve its pervasive visibility across the service delivery environment into business applications and unified communication services to understand service integrity, transaction performance and the user experience.
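One common conditioning technique that delivers "only the relevant data" is packet slicing: truncating each packet after its headers so tools that analyze header data alone receive far less to process and store. The sketch below illustrates the idea; the 64-byte cutoff is an assumption for the example, not a product setting.

```python
# Hedged sketch of packet slicing, a typical traffic-conditioning step:
# keep only the leading bytes of each packet (enough for the headers) and
# discard the payload before forwarding to a header-only analysis tool.
HEADER_SLICE = 64  # bytes retained per packet (illustrative value)

def slice_packets(packets, keep=HEADER_SLICE):
    """Return copies of the packets truncated to the first `keep` bytes."""
    return [p[:keep] for p in packets]

raw = [bytes(1500), bytes(40)]      # a full-size frame and a small ACK
sliced = slice_packets(raw)
print([len(p) for p in sliced])     # [64, 40]
```

Shorter packets mean lower link utilization toward the tool and less capture storage, which is how conditioning improves end-device performance.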

“Packet-flow data has been established as a rich and definitive source of network and application intelligence for a growing number of management applications, providing a granular window into service flows, interrelationships and the user experience,” said Steven Shalita, VP, Marketing at NetScout.

“With the introduction of the nGenius 1500 series packet flow switch, NetScout has taken a leadership position as the first established service management vendor to offer a network monitoring switch. Consequently, enterprise and service provider organizations can now look to a single vendor for a comprehensive and cost-effective access-to-analysis solution, resulting in a more efficient and simplified design of the monitoring environment. For IT organizations, this means reducing vendor complexity and simplifying support, while minimizing additional access-layer planning, procurement, and deployment activities, thus lowering the overall total cost of ownership.”

