
Network Visibility Essential in Today's Complex Networks

Mike Heumann

The complexity of modern enterprise networks is increasing due to data center consolidation, server virtualization/private cloud, compute layer virtualization, new application architectures, and the shift to dense 10Gb Ethernet (10GbE) or higher network speeds. According to an Emulex study of 150 IT professionals, conducted by Enterprise Strategy Group (ESG), these factors necessitate deeper levels of network visibility to aid in managing and troubleshooting these networks.

The study found that more than two-thirds (69%) of respondents expect the number of requests to capture network data (including metadata and packet-level data) to increase dramatically, driven by the needs of a variety of IT groups including network architecture, security, compliance, applications, and IT audit teams.

Key findings from the survey include:

· Network performance challenges are increasing, and stem from the size, complexity, and mobility of modern network environments. The most frequently cited network performance challenge (43%) is monitoring/managing network performance between groups of web, application, and database servers in the data center.

· The second most cited challenge by respondents is maintaining end-to-end network performance to endpoint devices connecting either via public networks (42%) or wide area networks (WAN) (35%). These challenges reflect a rapidly changing environment marked by centralized data centers and an increasingly mobile workforce, which requires extending the boundary of end-to-end management to mobile devices.

· Other challenges include tuning the network (33%), providing Quality of Service (QoS) based on traffic or application (27%), and understanding network latency (27%).

· Security challenges increase when proper network visibility for incident detection and resolution is lacking. The challenges respondents cited most often include capturing network behavior for incident detection (38%), monitoring network flows for anomalous behavior (35%), capturing and analyzing logs from network and security devices (29%), and establishing a baseline of normal network behavior (27%). (A brief sketch of flow baselining appears after this list.)

· Organizations juggle multiple network monitoring tools to capture network traffic and expect that number to keep growing in 2014. More than two-thirds (69%) of respondents stated that they expect the number of requests to capture network data (including metadata and packet-level data) to increase dramatically. Requests to capture network data are also now being initiated by the network architecture, security, compliance, IT audit, and application teams.

· More than half of organizations’ monitoring tools cannot cope with increased 10GbE network throughput. 54% of organizations report that their tools either sometimes or frequently cannot keep up with the increased throughput or drop packets as a result.
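The survey does not prescribe how to baseline network behavior, so the following is a minimal illustrative sketch rather than anything from the study: the flow records, IP addresses, and two-sigma threshold are assumptions. It builds a per-source baseline of normal traffic volume from historical flow records and flags new flows that deviate sharply from it.

```python
# Illustrative sketch only (not from the Emulex/ESG study): build a per-source
# baseline of "normal" traffic volume from historical flow records, then flag
# new flows that deviate from that baseline. Names and numbers are hypothetical.
from statistics import mean, pstdev

# Hypothetical historical byte counts per source IP, one value per interval
history = {
    "10.0.0.5": [12_000, 11_500, 13_100, 12_400],
    "10.0.0.9": [2_400, 2_650, 2_500, 2_700],
}

# Hypothetical newly observed flows: (source_ip, bytes in the latest interval)
new_flows = [("10.0.0.5", 12_800), ("10.0.0.9", 98_000)]

def build_baseline(history):
    """Summarize each source's normal volume as (mean, population stddev)."""
    return {src: (mean(vals), pstdev(vals)) for src, vals in history.items()}

def flag_anomalies(flows, baseline, sigma=2.0):
    """Return flows whose volume deviates from the baseline by more than sigma stddevs."""
    alerts = []
    for src, nbytes in flows:
        mu, sd = baseline.get(src, (None, None))
        if mu is None:
            alerts.append((src, nbytes, "no baseline"))
        elif sd > 0 and abs(nbytes - mu) > sigma * sd:
            alerts.append((src, nbytes, "volume anomaly"))
    return alerts

baseline = build_baseline(history)
print(flag_anomalies(new_flows, baseline))
# Expected output: [('10.0.0.9', 98000, 'volume anomaly')]
```

In production, baselining would work over richer flow attributes (ports, protocols, time of day) and much longer histories, but the structure is the same: learn what normal looks like, then alert on deviations.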

“The results of this survey point to exactly why enterprises need the ability to collect and monitor all network traffic: to improve network performance, security, and availability, and to maintain regulatory compliance,” said Mike Riley, SVP and GM of the Endace division of Emulex. “The impact of network outages and security events on the enterprise bottom line is already very large, and will only continue to grow. By implementing comprehensive network visibility architectures, organizations will be better prepared to ensure network performance, security, and compliance, and to dramatically reduce the time to find and fix critical problems.”

“Despite the challenges faced by organizations with rapidly growing and complex network environments, the ability to capture network data has never been more important. Network outages have proven to be disastrous from the cost of downtime alone, which can run to millions of dollars per hour, not to mention the dedicated resources it takes to identify the root cause of these outages,” said Bob Laliberte, Senior Analyst, Enterprise Strategy Group. “Organizations need to ensure they have effective monitoring solutions in place that will enable them to maintain network availability in the face of increasing data center complexity.”

About the Study: The 150 IT professionals who participated in the study represent multiple industries (including financial, business services, manufacturing, and retail) and are responsible for evaluating, purchasing and managing network infrastructure technologies, as well as using network-based monitoring or management tools. All respondents were from enterprise organizations with 1,000 or more employees.

Mike Heumann is Sr. Director, Marketing (Endace) for Emulex.

