Poor Network Visibility and Failures Cost Government up to $1 Million Per Hour

Nik Koutsoukos

For more than half of federal IT decision makers, it takes a day or more to detect and fix application performance problems. That delay is the heart of the federal network visibility crisis.

The extent of the crisis was revealed in a Riverbed-commissioned survey of federal IT decision makers conducted by Market Connections, which found that only 17 percent can address and fix such issues within minutes. Lacking insight into how their networks and applications are performing, agencies cannot immediately pinpoint and address problems.

This matters more as agencies rapidly move to the cloud to consolidate their IT resources: 45 percent of respondents reported that the move to the cloud has increased network complexity. As a result, data travels greater distances across agency networks to reach the defense and civilian workers who rely on it.

Poor application performance directly impacts federal agency productivity, and the costs associated with network outages can be staggering. Today, the average cost of an enterprise application failure is $500,000 to $1 million per hour.
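To put those figures in context, here is a rough, illustrative calculation rather than survey data: combining the cited per-hour cost range with the day-or-more resolution time reported above, and assuming an eight-hour working day, a single failure can run into the millions of dollars.

```python
# Rough illustration only: combines the cited $500K-$1M per-hour cost
# with a day-long resolution time. The 8-hour working day is an assumption.
HOURLY_COST_LOW = 500_000      # low end of the cited hourly failure cost (USD)
HOURLY_COST_HIGH = 1_000_000   # high end of the cited hourly failure cost (USD)
HOURS_TO_RESOLVE = 8           # assumed working-day detection-and-repair time

low_estimate = HOURLY_COST_LOW * HOURS_TO_RESOLVE
high_estimate = HOURLY_COST_HIGH * HOURS_TO_RESOLVE
print(f"One day-long failure: ${low_estimate:,} to ${high_estimate:,}")
# One day-long failure: $4,000,000 to $8,000,000
```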

Many federal IT executives lack the manpower, budget and tools necessary to find and fix performance issues quickly and efficiently. Without the right tools to monitor network and application performance, federal IT professionals cannot pinpoint problems that directly impact agency or mission effectiveness. In practice, that can mean delays in delivering materiel to warfighters in the field or a lack of access to critical defense and global security applications.

Networks need to perform quickly and seamlessly in order to fulfill mission requirements. Performance monitoring tools provide the broadest, most comprehensive view into network activity, helping to ensure fast performance, high security, and rapid recovery.

More than two-thirds (68%) of respondents see improved network reliability as a key value of monitoring tools, and more than three-quarters (77%) said automated investigation and diagnosis is an important feature in a network monitoring solution. With visibility across the entire network and its applications, IT departments can identify and fix problems in minutes, before end users notice and before productivity and citizen services suffer.
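As a purely illustrative example of what catching problems ahead of end users can look like, the sketch below polls a hypothetical application endpoint and flags it when availability or latency crosses a threshold. The URL, thresholds, and alert handling are assumptions for illustration, not features of any particular monitoring product or findings from the survey.

```python
# Minimal sketch of a synthetic availability/latency probe.
# The endpoint URL, thresholds, and alerting are illustrative assumptions only.
import time
import urllib.request
import urllib.error

ENDPOINT = "https://app.example.agency/health"  # hypothetical application health URL
LATENCY_THRESHOLD_S = 2.0                       # assumed acceptable response time
CHECK_INTERVAL_S = 60                           # probe once per minute

def probe(url: str) -> tuple[bool, float]:
    """Return (is_up, latency_seconds) for a single HTTP check."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    while True:
        up, latency = probe(ENDPOINT)
        if not up:
            print(f"ALERT: {ENDPOINT} is unreachable")        # hand off to paging/ticketing here
        elif latency > LATENCY_THRESHOLD_S:
            print(f"WARN: {ENDPOINT} slow ({latency:.2f}s)")  # investigate before users notice
        time.sleep(CHECK_INTERVAL_S)
```

In a real deployment this kind of check would be one small piece of a broader monitoring platform that also correlates network, application, and infrastructure telemetry.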

Survey respondents shared which features are important in network monitoring solutions, providing a window into their thoughts about current issues. Those features, listed in order of importance, are capacity planning (79%), automated investigation (77%), application-aware visibility (65%), and predictive modeling (58%).

Improving network visibility delivers clear benefits. With network monitoring tools, an agency gains improved network reliability, awareness of problems before end users notice them, faster network performance, higher employee productivity, and better insight into risk management and cyber threats. Greater visibility also tames network complexity: when IT executives can see the agency's whole network, they can be proactive, avoiding issues as well as fixing them.

With today’s globally distributed federal workforce, network visibility is critical to monitoring performance and to identifying and quickly fixing problems. Using network monitoring tools is a key step toward managing a complex network environment and ensuring that moves to the cloud deliver real benefits for the agency, its end users and, ultimately, the constituents they serve.
