Application Performance Problems? It's Not Always the Network!
A primer on how to win the application versus network argument
August 15, 2014

Don Thomas Jacob


“It must be the network!” Network administrators hear this phrase all too often when an application is slow, data transfer is not fast enough or VoIP calls drop. Now, of course, the network is the underlying infrastructure all of these services run on, so if something does not work as expected it’s understandable that users more often than not place the blame on the network.

And sometimes that blame is rightfully placed on the network. It may indeed be that there isn't enough bandwidth provisioned for the WAN, non-business traffic is hogging bandwidth, there are issues with high latency, or QoS prioritization is incorrect or missing. Route flaps, unhealthy network devices and configuration mistakes can also lead to application performance problems. Despite these potential problem areas, it is certainly not always the network that is to blame. The database, hardware and operating system are also common culprits. And believe it or not, a major cause of poor application performance can be the application itself.

Application performance issues stemming from the application itself can be caused by a number of factors, design-related and otherwise. For example, there could be too many elements or too much content in the application; it could be too chatty, making multiple connections for each user request; or it could be issuing slow, long-running queries. Not to mention memory leaks, thread locks or a bad database schema that slows down data retrieval. As a network administrator, though, try telling this to the application developer or systems administrator and more often than not you'll find yourself engaged in an epic battle.

Sure, the network accounts for only half as many of the possible causes, but that argument won't fly on its own. You're going to have to prove it. Here are a few of the common accusations developers and SysAdmins make and how you can be prepared to refute them:

“Hey, the network is just too slow”

Response: Power up your network monitoring tool and check the health and status of your network devices. SNMP tools can provide a lot of useful information. For example, when monitoring your routers and switches with SNMP, you can see whether there were route flaps, packet loss or an increase in round-trip time (RTT), and whether device CPU or memory utilization is high.
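The threshold logic such a tool applies is simple to sketch. Below is a minimal, hypothetical example: the metric names, values and limits are illustrative, not from any specific monitoring product or MIB, and a real poller would fill `sample` from SNMP rather than a literal.

```python
# Hypothetical health snapshot as an SNMP poller might return it.
# Metric names and thresholds are illustrative only.
THRESHOLDS = {"cpu_pct": 80, "mem_pct": 85, "pkt_loss_pct": 1.0, "rtt_ms": 150}

def check_device(name, metrics):
    """Return a list of threshold breaches for one device."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = metrics.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{name}: {metric}={value} exceeds {limit}")
    return alerts

# Example poll result: CPU is hot, everything else is healthy.
sample = {"cpu_pct": 92, "mem_pct": 60, "pkt_loss_pct": 0.2, "rtt_ms": 40}
for alert in check_device("core-rtr-1", sample):
    print(alert)
```

The point of the sketch is that a single breached threshold, caught early, is the evidence you bring to the argument.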

“Maybe your WAN link can’t handle my app”

Response: Cisco IP SLA can send synthetic packets and report on the readiness of a network link to handle TCP and UDP traffic, or report specifically on VoIP performance, RTT and so on. If the network can handle synthetic packets that match the application's protocol, it should be able to handle the actual application traffic too.
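To make the idea concrete, here is a rough, self-contained sketch of the simplest kind of synthetic probe: timing TCP handshakes and summarizing the RTTs. It is not Cisco IP SLA; it demos against a throwaway local listener so it can run anywhere, and the function and host/port are assumptions for illustration.

```python
import socket
import statistics
import time

def tcp_connect_rtt(host, port, samples=5):
    """Approximate a synthetic TCP probe: time the connect() handshake."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # handshake completed; close immediately
        rtts.append((time.perf_counter() - start) * 1000.0)  # ms
    return {"min": min(rtts), "avg": statistics.mean(rtts), "max": max(rtts)}

# Throwaway local listener so the sketch is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
stats = tcp_connect_rtt("127.0.0.1", server.getsockname()[1])
print(f"avg connect RTT {stats['avg']:.2f} ms")
server.close()
```

A real IP SLA probe adds jitter, loss and one-way delay measurements, but the principle is the same: measure the path with traffic shaped like the application's.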

“There’s just not enough bandwidth”

Response: There's a tool for that too! NetFlow data from routing and switching devices can report on bandwidth usage, telling you how much of your WAN link is being utilized, which applications are using it and which endpoints are involved, and can even report the ToS priority of each IP conversation.
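At its core, that reporting is an aggregation over exported flow records. The sketch below assumes a handful of hypothetical flow tuples (addresses, ports, byte counts and ToS values are made up) and rolls them up by application, the way a collector's top-talkers view would.

```python
from collections import defaultdict

# Hypothetical flow records (src, dst, dst_port, bytes, tos) as a
# NetFlow collector might store them after export. All values invented.
flows = [
    ("10.0.0.5", "10.0.1.9", 443, 1_200_000, 0),
    ("10.0.0.7", "10.0.1.9", 443, 800_000, 0),
    ("10.0.0.5", "10.0.2.3", 5060, 90_000, 46),    # SIP, DSCP EF
    ("10.0.0.8", "10.0.3.4", 3306, 2_500_000, 0),  # database traffic
]
APPS = {443: "HTTPS", 5060: "SIP", 3306: "MySQL"}  # port-to-app mapping

usage = defaultdict(int)
for src, dst, port, nbytes, tos in flows:
    usage[APPS.get(port, f"port-{port}")] += nbytes

# Top talkers by application, largest first.
for app, nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{app:6s} {nbytes / 1e6:.1f} MB")
```

Divide those byte counts by the polling interval and the link capacity and you have the utilization figure to put in front of your accuser.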

“It’s got to be something to do with your QoS priorities”

Response: Using a monitoring tool that supports Cisco CBQoS reporting, you can validate the performance of your QoS policies: pre- and post-policy statistics, how much traffic is being queued and how much is being dropped for each QoS policy and class.
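The validation itself comes down to comparing per-class counters. This hypothetical sketch (class names and byte counts are invented, not real CBQoS MIB output) computes the drop percentage for each class; a healthy priority class should sit at or near zero.

```python
# Hypothetical per-class CBQoS counters: pre-policy bytes offered,
# post-policy bytes sent, and bytes dropped. Values are illustrative.
classes = {
    "voice":       {"pre": 10_000_000,  "post": 10_000_000,  "drop": 0},
    "business":    {"pre": 80_000_000,  "post": 78_000_000,  "drop": 2_000_000},
    "best-effort": {"pre": 200_000_000, "post": 150_000_000, "drop": 50_000_000},
}

def drop_pct(counters):
    """Percentage of offered traffic dropped by the policy for one class."""
    return 100.0 * counters["drop"] / counters["pre"] if counters["pre"] else 0.0

for name, counters in classes.items():
    print(f"{name:12s} drop {drop_pct(counters):5.1f}%")
```

If the voice class shows zero drops while best-effort absorbs the congestion, the policy is doing exactly what it should.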

If your QoS policies are working as expected, it’s time to tell your foe, “Nope, try again!”

“Well, it might not be any of those things, but it’s still definitely the network”

Response: When all else fails, the answer is deep packet inspection (DPI). The visibility DPI provides is virtually unlimited: throughput information, out-of-order segment details, handshake details, retransmissions and almost any other information you need to prove once and for all that it's not the network, and to find the actual cause of poor application performance so you can really rub it in.
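One of the simplest DPI-style checks is spotting retransmissions in a capture. The sketch below is a deliberately simplified example over hypothetical (seq, length) pairs for one TCP flow; real analyzers also account for wraparound, SACK and partial overlaps.

```python
# Hypothetical captured TCP segments (seq, payload_len) for one flow,
# e.g. as a DPI tool might extract from a pcap. Values are invented.
segments = [(1000, 100), (1100, 100), (1100, 100), (1200, 100), (1000, 100)]

def count_retransmissions(segs):
    """Count segments whose sequence number was already seen in this flow."""
    seen, retx = set(), 0
    for seq, length in segs:
        if seq in seen:
            retx += 1  # same starting sequence number = likely retransmission
        seen.add(seq)
    return retx

print(f"{count_retransmissions(segments)} retransmitted segments")
```

A retransmission rate that stays low while the application crawls is exactly the kind of packet-level evidence that settles the argument.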

In conclusion, with the right technology and tools, network administrators can prove that the network is not at fault. Equally important, they can be proactive and ensure small, routine network issues don't become major headaches to begin with.

Don Thomas Jacob is a Head Geek at SolarWinds.
