
Application Performance Problems? It's Not Always the Network!

A primer on how to win the application versus network argument
Don Thomas Jacob

“It must be the network!” Network administrators hear this phrase all too often when an application is slow, data transfer is not fast enough or VoIP calls drop. Now, of course, the network is the underlying infrastructure all of these services run on, so if something does not work as expected it’s understandable that users more often than not place the blame on the network.

And sometimes that blame is rightfully placed on the network. It may indeed be that there isn’t enough bandwidth provisioned for the WAN, non-business traffic is hogging bandwidth, latency is high, or QoS priorities are incorrect or missing. Route flaps, unhealthy network devices and configuration mistakes can also lead to application performance problems, and all of these are network related. Despite these potential problem areas, it is certainly not always the network that is to blame. The database, hardware and operating system are also common culprits. And believe it or not, a major cause of poor application performance can be the application itself.

Application performance issues stemming from the application itself can be caused by a number of factors, some related to the application's design and some not. For example, the application could have too many elements or too much content; it could be too chatty, making multiple connections for each user request; or it could be running slow, long-running queries. Not to mention memory leaks, thread locks or a bad database schema that slows down data retrieval. As a network administrator, though, try telling this to the application developer or systems administrator and more often than not you’ll find yourself engaged in an epic battle.
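To make the "chatty application" and slow-query points concrete, here is a minimal, hypothetical sketch using Python's built-in sqlite3 module. The schema and data are invented purely for illustration; the point is the N+1 query pattern, where each user request turns into many database round trips.

import sqlite3

# In-memory database with invented sample data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders    VALUES (10, 1, 99.0), (11, 1, 25.0), (12, 2, 42.0);
""")

def chatty_report(conn):
    """One query per order: N+1 round trips that users blame on 'the network'."""
    rows = conn.execute("SELECT id, customer_id, total FROM orders").fetchall()
    report = []
    for order_id, customer_id, total in rows:
        # Each iteration is another round trip to the database server.
        name = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()[0]
        report.append((order_id, name, total))
    return report

def efficient_report(conn):
    """The same result in a single round trip, using a JOIN."""
    return conn.execute("""
        SELECT o.id, c.name, o.total
        FROM orders o JOIN customers c ON c.id = o.customer_id
    """).fetchall()

# Both produce identical results; only the number of round trips differs.
assert sorted(chatty_report(conn)) == sorted(efficient_report(conn))

Against an in-memory database the difference is invisible, but run the chatty version across a WAN and every order adds a full network round trip of latency, which is exactly the kind of delay that ends up being blamed on the network.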

Sure, you could point out that there are only half as many reasons for the issue to be in the network as in the application, but that argument won’t fly. You’re going to have to prove it. Here are a few of the common accusations developers and SysAdmins make, and how you can be prepared to refute them:

“Hey, the network is just too slow”

Response: Power up your network monitoring tool and check the health and status of your network devices. SNMP tools can provide a lot of useful information. For example, when monitoring your routers and switches with SNMP, you can see whether there have been route flaps or packet loss, whether RTT and latency have increased, and whether device CPU or memory utilization is high.
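As an illustration, here is a minimal sketch of that kind of SNMP health check, assuming the classic synchronous pysnmp (4.x) API and SNMPv2c. The device address, community string and interface index are placeholders, and the OIDs are standard MIB-2/IF-MIB counters.

# Minimal SNMP health poll: uptime plus traffic/error/discard counters on
# one interface. Host, community and the interface index (.1) are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

OIDS = {
    "sysUpTime":      "1.3.6.1.2.1.1.3.0",       # device uptime (resets hint at reloads)
    "ifInOctets.1":   "1.3.6.1.2.1.2.2.1.10.1",  # inbound bytes on interface 1
    "ifInErrors.1":   "1.3.6.1.2.1.2.2.1.14.1",  # inbound errors on interface 1
    "ifInDiscards.1": "1.3.6.1.2.1.2.2.1.13.1",  # inbound discards (often buffer drops)
}

def poll(host, community="public"):
    for name, oid in OIDS.items():
        error_indication, error_status, _, var_binds = next(
            getCmd(SnmpEngine(),
                   CommunityData(community),
                   UdpTransportTarget((host, 161)),
                   ContextData(),
                   ObjectType(ObjectIdentity(oid)))
        )
        if error_indication or error_status:
            print(f"{name}: poll failed ({error_indication or error_status})")
            continue
        for var_bind in var_binds:
            # Each var_bind holds the OID and its value
            print(f"{name}: " + " = ".join(x.prettyPrint() for x in var_bind))

poll("192.0.2.1")  # documentation address; replace with a real device

Poll counters like these at a regular interval and the deltas tell you whether errors, discards or utilization are actually climbing when users complain.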

“Maybe your WAN link can’t handle my app”

Response: Cisco IP SLA can generate synthetic traffic and report on how ready a network link is to handle IP traffic over TCP and UDP, or report specifically on VoIP performance, RTT and so on. If the link can handle synthetic packets that match the application's protocol, it should be able to handle the actual application traffic as well.
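IP SLA itself is configured on the router or switch, but if you want a quick, application-level sanity check of the same idea, here is a rough Python stand-in that measures TCP connect time to a target, much like an IP SLA tcp-connect probe would. The host and port are placeholders, and this is a sketch rather than a substitute for IP SLA.

# Synthetic TCP connect probe: how long does the handshake to host:port take?
import socket
import statistics
import time

def tcp_connect_rtt(host, port, samples=5, timeout=2.0):
    """Return a list of TCP connect times (in ms) to host:port."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                times.append((time.perf_counter() - start) * 1000.0)
        except OSError as exc:
            print(f"probe failed: {exc}")
        time.sleep(0.5)  # pace the probes
    return times

rtts = tcp_connect_rtt("app-server.example.com", 443)  # placeholder target
if rtts:
    print(f"min/avg/max connect time: "
          f"{min(rtts):.1f}/{statistics.mean(rtts):.1f}/{max(rtts):.1f} ms")

If synthetic probes like these complete quickly and consistently while the application is still slow, the link itself is unlikely to be the bottleneck.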

“There’s just not enough bandwidth”

Response: There’s a tool for that too! NetFlow data from routing and switching devices can report on bandwidth usage, telling you how much of your WAN link is being utilized, which applications are using it, which endpoints are involved and even the ToS priority of each IP conversation.
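To illustrate what that kind of report boils down to, here is a small Python sketch that aggregates already-decoded flow records into top applications and top conversations. The sample records are invented; in practice they would come from your NetFlow collector.

# Aggregate decoded flow records into "top applications" and "top
# conversations" tables, the way a NetFlow reporting tool presents them.
from collections import Counter

flows = [
    # (src_ip, dst_ip, dst_port, protocol, tos, bytes) -- made-up sample data
    ("10.1.1.10", "10.2.2.20", 443, "tcp", 0, 1_200_000),
    ("10.1.1.11", "10.2.2.20", 443, "tcp", 0, 800_000),
    ("10.1.1.10", "10.2.2.30", 5060, "udp", 46, 90_000),
    ("10.1.1.12", "10.2.2.40", 80, "tcp", 0, 2_500_000),
]

by_app = Counter()
by_conversation = Counter()
for src, dst, dport, proto, tos, nbytes in flows:
    by_app[(proto, dport)] += nbytes
    by_conversation[(src, dst, tos)] += nbytes

print("Top applications by bytes:")
for (proto, dport), nbytes in by_app.most_common(3):
    print(f"  {proto}/{dport}: {nbytes:,} bytes")

print("Top conversations by bytes (with ToS/DSCP):")
for (src, dst, tos), nbytes in by_conversation.most_common(3):
    print(f"  {src} -> {dst} (tos {tos}): {nbytes:,} bytes")

A report like this quickly shows whether the WAN link is saturated by the application in question or by something else entirely.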

“It’s got to be something to do with your QoS priorities”

Response: Using a monitoring tool that supports Cisco CBQoS reporting, you can validate the performance of your QoS policies: pre- and post-policy statistics, queue and buffer usage, and how much traffic is being dropped for each QoS policy and class.
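As a simple illustration of how to read those numbers, the sketch below computes the drop percentage per class from pre-policy and drop counters. The class names and counter values are made up; in practice they come from your monitoring tool or the CISCO-CLASS-BASED-QOS-MIB.

# Interpret CBQoS-style counters: how much of each class is being dropped?
classes = {
    # class name: (pre-policy bytes, post-policy bytes, dropped bytes) -- invented
    "VOICE":         (1_000_000,     998_000,     2_000),
    "BUSINESS":      (8_000_000,   7_200_000,   800_000),
    "class-default": (20_000_000, 15_000_000, 5_000_000),
}

for name, (pre, post, dropped) in classes.items():
    drop_pct = 100.0 * dropped / pre if pre else 0.0
    print(f"{name:14s} pre={pre:>12,d}  post={post:>12,d}  "
          f"drops={dropped:>11,d} ({drop_pct:.1f}%)")
    if drop_pct > 1.0:
        print(f"  -> class {name} is dropping traffic; "
              "revisit its bandwidth allocation or queue limit")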

If your QoS policies are working as expected, it’s time to tell your foe, “Nope, try again!”

“Well, it might not be any of those things, but it’s still definitely the network”

Response: When all else fails, the answer is deep packet inspection (DPI). The visibility that DPI provides is virtually unlimited: throughput information, out-of-order segment details, handshake details, retransmissions and almost any other detail you will need to prove once and for all that it’s not the network, and to find the actual cause of poor application performance so you can really rub it in.
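If you want a taste of what a DPI or packet-analysis tool surfaces, here is a crude Python sketch using scapy that flags probable TCP retransmissions in a capture file. The file name is a placeholder, and a real DPI tool does far more (reassembly, handshake timing, application decoding), but even this level of detail is often enough to show where the problem actually lies.

# Count probable TCP retransmissions per source/destination pair in a pcap.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

seen = set()
retransmissions = Counter()

for pkt in rdpcap("capture.pcap"):  # placeholder file name
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    payload_len = len(pkt[TCP].payload)
    if payload_len == 0:
        continue  # ignore pure ACKs
    key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport,
           pkt[TCP].seq, payload_len)
    if key in seen:
        # Same flow, same sequence number and segment length seen again:
        # very likely a retransmission (or a packet captured twice).
        retransmissions[(pkt[IP].src, pkt[IP].dst)] += 1
    seen.add(key)

for (src, dst), count in retransmissions.most_common(5):
    print(f"{src} -> {dst}: {count} probable retransmissions")

If the capture shows clean handshakes, no loss and no retransmissions while the application still crawls, the packets themselves make your case for you.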

In conclusion, with the right technology and tools, network administrators can prove that the network is not at fault. Equally important, they can be proactive and ensure that small, routine network issues don’t become major headaches to begin with.

Don Thomas Jacob is a Head Geek at SolarWinds.
