20 Top Factors That Impact Website Response Time - Part 4

APMdigest asked industry experts – from analysts and consultants to the top vendors – to outline the most important factors that impact website response time. The last installment of the list, featuring factors 16–20, presents various factors you may not have considered.

Start with Part 1 of "20 Top Factors That Impact Website Response Time"

Start with Part 2 of "20 Top Factors That Impact Website Response Time"

Start with Part 3 of "20 Top Factors That Impact Website Response Time"

16. IT INFRASTRUCTURE CHANGES

As new servers are powered on, database configurations are changed, shared storage is reconfigured, VMs are reallocated, and a whole host of other everyday infrastructure changes take place, the IT admin rarely knows how the ripple effects of these changes may be impacting web response times and the company’s bottom line. Consider this example: a storage (disk) change is made. The change slows a group of VMs. One of those VMs supports a database, so its queries slow down. Those queries support the e-commerce application servers, and as a result 60% of the users of that application experience slower responses. Or say an overzealous VM admin observes that certain hosts are underutilized and adds an additional application to them; when an unexpected spike in user load occurs, there are insufficient compute resources to cover it. Within each of those technologies, and the transitions between them, lies the potential for problems in end-user transactions.
Steve Rosenberg
VP & GM, Dell Performance Monitoring
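To make that cascade concrete, here is a minimal Python sketch of tracing downstream impact through a dependency map. The component names and the map itself are hypothetical, purely for illustration; in practice this information would come from a CMDB or a dependency-discovery tool rather than a hand-written dictionary.

```python
from collections import deque

# Hypothetical dependency map: each key lists the components that depend on it
# (its downstream consumers). Names are illustrative, not from the article.
DEPENDENTS = {
    "shared-storage": ["vm-cluster-a"],
    "vm-cluster-a": ["orders-db"],
    "orders-db": ["ecommerce-app"],
    "ecommerce-app": ["end-users"],
}

def downstream_impact(changed_component: str) -> list[str]:
    """Breadth-first walk of everything downstream of a changed component."""
    seen = {changed_component}
    queue = deque([changed_component])
    impacted = []
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent not in seen:
                seen.add(dependent)
                impacted.append(dependent)
                queue.append(dependent)
    return impacted

if __name__ == "__main__":
    # A disk reconfiguration on shared storage ripples all the way to end users.
    print(downstream_impact("shared-storage"))
    # ['vm-cluster-a', 'orders-db', 'ecommerce-app', 'end-users']
```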

17. ALTERED CODE

One of the top factors impacting website response time is altered code. A code change can degrade performance without triggering traditional monitoring alarms, because those alarms are usually based on existing or known thresholds.
Mike Paquette
VP of Security Products, Prelert
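A minimal sketch of why a static threshold can miss this: the example below compares a fixed alarm threshold with a simple rolling-baseline check. The threshold value, the sample latencies, and the three-sigma rule are illustrative assumptions, not a prescription for any particular monitoring product.

```python
import statistics

STATIC_THRESHOLD_MS = 2000  # an assumed "known" alarm threshold

def threshold_alarm(latest_ms: float) -> bool:
    """Classic threshold alert: only fires above a fixed value."""
    return latest_ms > STATIC_THRESHOLD_MS

def baseline_alarm(history_ms: list[float], latest_ms: float, sigmas: float = 3.0) -> bool:
    """Flag values far outside the recent baseline, even if they never
    cross the static threshold."""
    mean = statistics.fmean(history_ms)
    stdev = statistics.pstdev(history_ms) or 1.0  # avoid divide-by-zero on flat data
    return (latest_ms - mean) / stdev > sigmas

# Response times creep from ~300 ms to 900 ms after a code change:
history = [290.0, 310.0, 305.0, 295.0, 300.0, 310.0, 298.0, 302.0]
latest = 900.0
print(threshold_alarm(latest))          # False: still well under the 2000 ms alarm
print(baseline_alarm(history, latest))  # True: far outside the recent baseline
```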

18. DISTRIBUTED DENIAL OF SERVICE ATTACKS (DDoS)

In 2014, Quocirca researched the concerns European organizations have about the security and performance of their online domains and the actions taken to mitigate them. The survey found that, by a small margin, the biggest overall concern was denial of service attacks, which have become so widespread that they can affect just about any online resource. This is backed by other, non-Quocirca surveys showing that the number and scale of attacks have continually increased over the last few years. However, whilst it is the biggest attacks that hit the headlines, it is the huge number of smaller, largely unreported attacks that should be of most concern. These are launched as diversionary measures to mask other, more targeted attacks, or even as demonstrations. DDoS was followed by user endpoint issues, poor network performance, poor website server performance and DNS performance, in that order.
Bob Tarzey
Analyst and Director, Quocirca

Download the free report from Quocirca.

19. INFORMATION ARCHITECTURE

The top factor that impacts website response time is the user’s experience, and that is dependent on information architecture. Just responding quickly to an HTTP request is insufficient; that would be a technical answer to a business problem. The website presumably exists to provide information and to answer a user’s questions without human intervention, providing information availability 24x7. Is the information on the website structured so that users find what is appropriate to their need, instead of waiting and then getting unhelpful information? Information architecture structures the information available for each role and may group it by industry. In addition, good information architecture structures information in the best form to answer a question or, better yet, solve a problem. Putting the right information in front of the user without excessive navigation and false starts is the best way to improve response. The information provided should build in complexity as the user’s engagement continues. We should be timing how long it takes to get helpful information to the user, not just how long a request/response took.
Charley Rich
VP Product Management and Marketing, Nastel Technologies
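One hedged way to act on that last point is to measure "time to helpful information" from page-view logs rather than per-request latency. The sketch below assumes a hypothetical log layout and a hypothetical /help/ path prefix marking "answer" pages; both would need to be adapted to a real site and analytics pipeline.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical page-view log: (session_id, ISO timestamp, path). In practice
# these records would come from web analytics or access logs.
EVENTS = [
    ("s1", "2024-05-01T10:00:00", "/"),
    ("s1", "2024-05-01T10:00:20", "/search?q=reset+password"),
    ("s1", "2024-05-01T10:01:05", "/help/reset-password"),   # the "answer" page
    ("s2", "2024-05-01T11:00:00", "/"),
    ("s2", "2024-05-01T11:03:40", "/help/reset-password"),
]

ANSWER_PREFIX = "/help/"  # assumption: pages under /help/ count as helpful information

def time_to_answer(events):
    """Seconds from each session's first page view to its first answer page."""
    sessions = defaultdict(list)
    for sid, ts, path in events:
        sessions[sid].append((datetime.fromisoformat(ts), path))
    results = {}
    for sid, views in sessions.items():
        views.sort()
        start = views[0][0]
        hit = next((t for t, p in views if p.startswith(ANSWER_PREFIX)), None)
        results[sid] = (hit - start).total_seconds() if hit else None
    return results

print(time_to_answer(EVENTS))  # {'s1': 65.0, 's2': 220.0}
```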

20. THE IT TEAM

One factor that is often overlooked is finger pointing. Website response time can be impacted by a number of different factors, which can cause internal finger pointing as folks try to pinpoint the problem. It could be that the client’s network is slow; maybe there’s an issue with the WAN link out to the ISP; perhaps the firewall is slowing or denying traffic – you get the picture. Without an overarching performance monitoring platform keeping an eye on all of these disparate areas, that internal struggle can slow things down considerably.
Brian Promes
Director of Product Marketing, SevOne
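One way to take the heat out of the finger pointing is to break a single page fetch into its phases, so the conversation starts from data rather than suspicion. The sketch below uses only the Python standard library to time DNS lookup, TCP connect, TLS handshake, and time to first byte for an HTTPS request; it is a rough client-side approximation for illustration, not a substitute for a monitoring platform.

```python
import socket
import ssl
import time

def phase_timings(host: str, path: str = "/") -> dict:
    """Rough breakdown of where response time goes for one HTTPS request."""
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, 443, type=socket.SOCK_STREAM)[0][4][0]
    t_dns = time.perf_counter()

    sock = socket.create_connection((addr, 443), timeout=10)
    t_connect = time.perf_counter()

    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    t_tls = time.perf_counter()

    tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    tls.recv(1)  # first byte of the response
    t_ttfb = time.perf_counter()
    tls.close()

    return {
        "dns_ms": (t_dns - t0) * 1000,
        "connect_ms": (t_connect - t_dns) * 1000,
        "tls_ms": (t_tls - t_connect) * 1000,
        "ttfb_ms": (t_ttfb - t_tls) * 1000,
    }

# Example: see whether time is going to DNS, the network path, or the server.
print(phase_timings("example.com"))
```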

In today's software-defined economy, where every business runs on apps, the top factor that impacts performance and response time is inattention to early warning signs like increased load time for key pages, long-running database queries, or unresolved user complaints. Apps often provide clear indications via monitoring alerts when any of these occurs, but restoring and maintaining performance first requires a culture of service quality that associates uptime with customer value. Tools and metrics are useful, but only if people and process are aligned to deliver exceptional user experiences. A lack of service culture often leads to early warning signs being ignored. One way to ensure that app teams value site performance is to make it easier for them to focus on solving the problems that most directly impact customer experience. Too many irrelevant alerts mean valuable resources spend time figuring out which problem to solve, or solving the wrong problem, both of which are inefficient and demotivating.
Dan Turchin
VP Product, BigPanda
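As a small illustration of "focus on the problems that most directly impact customer experience," the sketch below ranks open alerts by a hypothetical customer-impact score before anyone picks up the pager. The alert fields and scoring weights are assumptions for the example only, not a recommended model.

```python
# Hypothetical alert records; fields and weights are illustrative only.
ALERTS = [
    {"id": "a1", "service": "checkout", "affected_users": 1200, "customer_facing": True},
    {"id": "a2", "service": "internal-wiki", "affected_users": 15, "customer_facing": False},
    {"id": "a3", "service": "search", "affected_users": 400, "customer_facing": True},
]

def impact_score(alert: dict) -> float:
    """Crude customer-impact score: user reach, boosted for customer-facing services."""
    return alert["affected_users"] * (3.0 if alert["customer_facing"] else 1.0)

# Work the queue highest-impact first instead of in arrival order.
for alert in sorted(ALERTS, key=impact_score, reverse=True):
    print(alert["id"], alert["service"], impact_score(alert))
```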
