
Application Performance Starts with the Database

Gerardo Dada

Whether they know it or not, every department, group or function within a business, from marketing to sales to the executive leadership, relies on a database in one way or another.

How? Applications are the heart of all critical business functions and an essential component of nearly every end user’s job, affecting productivity, end user satisfaction and ultimately revenue. In fact, in a recent SolarWinds survey of business end users, 93 percent of respondents said application performance and availability affect their ability to do their job, with 62 percent saying it is absolutely critical.

And at the heart of nearly every application is a database.

This means when an application performance or availability problem arises, there’s a good chance it’s associated with the underlying database’s performance. And end users have little patience for such problems: 67 percent also said they expect IT to resolve these issues within an hour or less. So, to keep end users happy, and productivity and revenue humming, application performance should be a paramount concern, and database performance must be a key element of that concern.

As validation of this point, another SolarWinds survey found that 71 percent of IT pros agree that a significant number of application performance issues are related to databases and that application performance should start with the database. In addition, 71 percent said application response time is a primary challenge they are trying to solve by improving database performance.

Why? There are three primary reasons. First, database engines are highly complex, from intricate queries and execution plans to replication and the inner workings of the engine itself.

Second, there is a shortage of skilled, performance-oriented database administrators (DBAs) in the market, resulting in many “accidental DBAs.”

Finally, while compute and network resources can easily scale vertically or horizontally with today’s virtualization and cloud technologies, the same cannot be said for databases.

If you’re experiencing database-related application performance issues, or simply want to improve application performance by optimizing underlying databases, you should consider the following tips.

1. Get a full view of the application stack

The days of discrete monitoring tools are over. In today’s complex, highly interconnected IT environments, it’s a must to use tools that provide visibility across the entire application stack: the application delivery chain made up of the application and all the backend IT that supports it, including the software, middleware and extended infrastructure required for performance, and especially the database.

2. Be proactive and align the team behind end user experience

No one likes fighting fires. One way to minimize them is to be proactive and look at performance continuously, not only when it becomes a major problem. The entire team supporting applications should understand end user experience goals in terms of page-load and response times so it becomes a shared objective with very concrete business impact. Without a scoreboard everyone can see, it’s hard to know if you are winning.
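
One way to make that scoreboard concrete is to track a percentile rather than an average, since a healthy mean can hide the slow outliers end users actually feel. Below is a minimal sketch in Python, using hypothetical response-time samples and an example target:

```python
# Sketch: check a shared response-time goal against a percentile.
# The samples and the 500 ms target below are hypothetical.
from statistics import quantiles

samples_ms = [120, 135, 150, 140, 980, 160, 145, 155, 130, 2100,
              150, 148, 152, 138, 142, 149, 151, 147, 136, 144]
TARGET_P95_MS = 500  # example shared goal for the whole team

# quantiles(..., n=100) returns 99 cut points; index 94 is p95.
p95 = quantiles(samples_ms, n=100)[94]
status = "meeting" if p95 <= TARGET_P95_MS else "missing"
print(f"p95 = {p95:.0f} ms -> {status} the {TARGET_P95_MS} ms goal")
```

A couple of extreme samples are enough to blow the p95 even when the average still looks fine, which is exactly why a percentile makes a better scoreboard.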

3. Stop guessing

It’s not uncommon to default to adding hardware to hopefully improve performance — for example, switching to SSD drives. However, this is a gamble that has cost more than a few people their jobs. If the bottleneck is memory or a really bad SQL query, investing in SSD drives is unlikely to improve application performance.
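
One inexpensive way to stop guessing is to look at a suspect query’s actual execution plan before spending on hardware. Here is a minimal sketch, assuming PostgreSQL and the psycopg2 driver; the connection string and query are hypothetical placeholders:

```python
# Sketch: examine where a query actually spends its time before
# assuming storage is the bottleneck. Assumes PostgreSQL/psycopg2;
# the DSN and the query are hypothetical placeholders.
import psycopg2

DSN = "dbname=app user=app_ro host=db.example.internal"  # placeholder
SUSPECT_QUERY = "SELECT * FROM orders WHERE customer_id = 42"  # placeholder

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # EXPLAIN (ANALYZE, BUFFERS) executes the query and reports
        # timing plus buffer hits vs. disk reads. A sequential scan
        # served mostly from cache points at the query or a missing
        # index, not at storage, so faster drives would not help.
        cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + SUSPECT_QUERY)
        for (line,) in cur.fetchall():
            print(line)
```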

4. Go beyond traditional monitoring

Most traditional monitoring tools focus on health and status, providing many charts and a lot of data, most of which is hard to interpret and time-consuming to turn into performance insights. Instead, tools with wait-time analysis capabilities can help identify how an application request is executed step by step, and which processes and resources the application is waiting on. This provides a different view into performance, one that is more actionable than traditional infrastructure dashboards.
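
As a concrete starting point, most database engines expose their wait statistics directly. The sketch below assumes SQL Server and the pyodbc driver, with a hypothetical connection string, and lists the wait types the engine has spent the most time on since its last restart:

```python
# Sketch: a first step toward wait-time analysis on SQL Server,
# using pyodbc. The connection string is a hypothetical placeholder.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=db.example.internal;DATABASE=master;"
    "Trusted_Connection=yes;"
)  # placeholder

TOP_WAITS = """
SELECT TOP 10 wait_type, waiting_tasks_count,
       wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    # What dominates matters: PAGEIOLATCH_* waits point at storage,
    # LCK_M_* at blocking, SOS_SCHEDULER_YIELD at CPU pressure --
    # each calls for a very different fix.
    for row in conn.execute(TOP_WAITS):
        print(row.wait_type, row.wait_time_ms)
```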

5. Establish baselines

It’s important to establish historic baselines of application and database performance that look at how applications performed at the same time on the same day last week, and the week before that, to detect any anomalies before they become larger problems. That way, if a variation is identified, it’s much easier to track down the code, resource or configuration change that could be the root cause.
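
A minimal version of that week-over-week comparison needs no special tooling. The sketch below, in plain Python with hypothetical numbers, flags the current measurement when it deviates sharply from the same hour on the same weekday in previous weeks:

```python
# Sketch: week-over-week baseline check with hypothetical data.
# Each history value is the mean response time (ms) observed at
# the same hour on the same weekday, oldest first.
from statistics import mean, stdev

history_ms = [182.0, 175.0, 190.0, 178.0]  # same hour, past 4 weeks
current_ms = 420.0                          # this week's measurement

baseline = mean(history_ms)
spread = stdev(history_ms)

# Flag anything more than 3 standard deviations above baseline;
# the threshold is arbitrary and should be tuned per application.
if current_ms > baseline + 3 * spread:
    print(f"anomaly: {current_ms} ms vs baseline {baseline:.0f} ms")
else:
    print("within normal weekly variation")
```

In practice the same check would run per query, per endpoint or per database, but the principle (compare like-for-like time windows, then investigate deviations) is the same.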

6. Get everyone on the same page

Today’s complex applications are supported by an entire stack of technologies that is only as good as its weakest link. And yet, most IT operations teams are organized in silos, each person or group supporting a part of the stack. To avoid finger pointing, give every team in the organization a unified view of application performance, ideally based on wait-time analysis, so everyone can focus on solving application problems quickly.

Because databases are the backbone of nearly all business-critical applications, the impact of database performance on application performance cannot be overstated. Following these tips can help eliminate many potential bottlenecks.

Gerardo Dada is VP Product Marketing and Strategy at SolarWinds.
