The 5 Most Common Application Bottlenecks
March 30, 2017

Sven Hammar
Apica


Application bottlenecks can lead an otherwise functional computer or server to slow down to a crawl. The term "bottleneck" refers to both an overloaded network and the state of a computing device in which one component is unable to keep pace with the rest of the system, thus slowing overall performance.
 
Addressing bottleneck issues usually results in returning the system to operable performance levels; however, fixing bottleneck issues requires first identifying the underperforming component. These five bottleneck causes are among the most common:
 

1. CPU Utilization

 
According to Microsoft, "processor bottlenecks occur when the processor is so busy that it cannot respond to requests for time." Simply put, the central processing unit (CPU) is overloaded and unable to perform tasks in a timely manner.
 
CPU bottleneck shows up in two forms: a processor running at over 80 percent capacity for an extended period of time, and an overly long processor queue. CPU utilization bottlenecks often stem from insufficient system memory and continual interruption from input/output devices. Resolving these issues involves increasing CPU power, adding more random access memory (RAM), and improving software coding efficiency.
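The "over 80 percent for an extended period" rule of thumb can be checked with a small sketch. This is an illustrative approximation, not a monitoring tool: it compares the Unix 1-minute load average to the core count, and `cpu_saturation` is a hypothetical helper name.

```python
import os


def cpu_saturation() -> float:
    """Return the 1-minute load average divided by the core count.

    A value held near or above 0.8 for an extended period mirrors the
    "processor over 80 percent" rule of thumb described above.
    """
    load_1min, _, _ = os.getloadavg()  # Unix-only; raises OSError elsewhere
    return load_1min / (os.cpu_count() or 1)


if __name__ == "__main__":
    print(f"CPU saturation: {cpu_saturation():.2f}")
```

A single sample means little; it is the sustained reading over minutes or hours that indicates a genuine CPU bottleneck rather than a momentary spike.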
 

2. Memory Utilization

 
A memory bottleneck implies that the system does not have sufficient or fast enough RAM. This situation cuts the speed at which the RAM can serve information to the CPU, which slows overall operations. In cases where the system doesn't have enough memory, the operating system will start paging data out to a significantly slower hard disk drive (HDD) or solid state drive (SSD) to keep things running. Alternatively, if the RAM cannot serve data to the CPU fast enough, the device will experience both slowdown and low CPU usage rates.
 
Resolving the issue typically involves installing higher capacity and/or faster RAM. In cases where the existing RAM is too slow, it needs to be replaced, whereas capacity bottlenecks can be dealt with simply by adding more memory. In other cases, the problem may stem from a programming error called a "memory leak," which means a program is not releasing memory back to the system when it is done with it. Resolving this issue requires a program fix.
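The memory-leak pattern above can be sketched in a few lines. This is a generic illustration, not drawn from the article: the unbounded cache leaks because entries are never evicted, and the bounded version shows one common fix. The class and function names are hypothetical.

```python
from collections import OrderedDict

# A classic leak: a module-level cache that grows without bound, so the
# program never releases that memory back for reuse.
_leaky_cache = {}


def render_leaky(request_id, payload):
    _leaky_cache[request_id] = payload  # entries are never evicted
    return payload


# The fix: bound the cache and evict old entries once it fills up.
class BoundedCache:
    def __init__(self, max_items=1000):
        self._items = OrderedDict()
        self._max = max_items

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)         # mark as most recently used
        if len(self._items) > self._max:
            self._items.popitem(last=False)  # evict the oldest entry

    def get(self, key, default=None):
        return self._items.get(key, default)
```

Bounding the cache trades a few extra disk or database reads for a stable memory footprint, which is usually the right trade for a long-running server process.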
 

3. Network Utilization

 
Network bottlenecks occur when the communication between two devices lacks the necessary bandwidth or processing power to complete a task quickly. According to Microsoft, network bottlenecks occur when a server is overloaded, when a network communication device is overburdened, or when the network itself loses integrity. Resolving network utilization issues typically involves upgrading or adding servers, as well as upgrading network hardware like routers, hubs, and access points.
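A quick way to separate a network problem from an application problem is to time connection setup alone. A minimal sketch, assuming a TCP service to probe (the helper name `tcp_connect_ms` is hypothetical):

```python
import socket
import time


def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Milliseconds needed to open a TCP connection to host:port.

    Consistently high values for a service that responds quickly once
    connected point at the network path rather than the application.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only measure connection setup, not request handling
    return (time.monotonic() - start) * 1000.0
```

If connect times are low but full requests are slow, the bottleneck is more likely in the server or application than in the network itself.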

4. Software Limitation

 
Sometimes bottleneck-related performance dips originate from the software itself. In some cases, a program may be built to handle only a fixed number of tasks at once, so it won't use additional CPU or RAM even when those resources are available.
 
The most common application problems are transactions that place an unoptimized load on the database and/or other system resources such as static content, authentication, and connection pools. In many cases, application environments such as web servers are left with default configuration settings that hold up poorly under peak traffic.
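The effect of such a fixed limit can be demonstrated with a thread pool whose size caps throughput regardless of available hardware. This is an illustrative sketch, not the article's code; `handle_request` stands in for any I/O-bound unit of work such as a database query.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(_):
    time.sleep(0.05)  # simulated I/O-bound work, e.g. a database query
    return True


def serve(n_requests: int, pool_size: int) -> float:
    """Wall-clock seconds to serve n_requests with a fixed-size pool."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        list(pool.map(handle_request, range(n_requests)))
    return time.monotonic() - start
```

With a pool of 2, twenty 50 ms requests take roughly half a second no matter how many idle cores the machine has; raising `max_workers` to 20 cuts that to roughly 50 ms. Many default web-server and connection-pool settings impose exactly this kind of ceiling until they are tuned for peak load.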
 

5. Disk Usage

 
The slowest component inside a computer or server is typically the long-term storage, which includes HDDs and SSDs, and is often an unavoidable bottleneck. Even the fastest long-term storage solutions have physical speed limits, making this bottleneck cause one of the more difficult ones to troubleshoot. In many cases, disk performance can be improved by reducing fragmentation and caching more data in RAM. On a physical level, address insufficient bandwidth by switching to faster storage devices and expanding RAID (a data storage virtualization technology) configurations.
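Caching disk reads in RAM, as suggested above, can be as simple as memoizing a read function. A minimal sketch (the function name is hypothetical, and this assumes the file's contents do not change while cached):

```python
import functools
import pathlib


@functools.lru_cache(maxsize=128)
def read_file_cached(path: str) -> str:
    """First call hits the (slow) disk; repeat calls are served from RAM."""
    return pathlib.Path(path).read_text()
```

Real systems layer this idea at many levels, from the OS page cache to dedicated caches such as Redis, but the principle is the same: serve hot data from memory so the disk is touched as rarely as possible.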
 
Load testing and monitoring tools are excellent at identifying bottleneck problems that hinder performance. Use these tools to optimize your business’s online platforms.

Sven Hammar is Chief Strategy Officer and Founder of Apica