20 Top Factors That Impact Website Response Time - Part 4

APMdigest asked industry experts – from analysts and consultants to the top vendors – to outline the most important factors that impact website response time. The last installment of the list, featuring factors 16–20, presents various factors you may not have considered.

Start with Part 1 of "20 Top Factors That Impact Website Response Time"

Start with Part 2 of "20 Top Factors That Impact Website Response Time"

Start with Part 3 of "20 Top Factors That Impact Website Response Time"

16. IT INFRASTRUCTURE CHANGES

As new servers are powered on, database configurations are changed, shared storage is reconfigured, and VMs are reallocated, along with a host of other everyday infrastructure changes, the IT admin rarely knows how the downstream effects of these changes may be impacting web response times and the company’s bottom line. Consider this example: a storage (disk) change is made. The change slows a group of VMs. One of those VMs supports a database, so its queries slow down. Suppose those queries support the e-commerce application servers; as a result, 60% of that application's users experience slower responses. Or suppose an overzealous VM admin observes that certain hosts are underutilized and adds an additional application to those servers. Now, when an unexpected spike in user load occurs, there are insufficient compute resources to cover it. Within each of these technologies, and the transitions between them, lies the potential for problems in end-user transactions.
Steve Rosenberg
VP & GM, Dell Performance Monitoring
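The cascade described above can be sketched as a dependency graph: propagate a change through everything that depends on the changed component. This is a minimal illustration, not any monitoring product's implementation; all component names are hypothetical.

```python
from collections import deque

# Hypothetical dependency graph: each component maps to the
# components that consume it (its downstream dependents).
DEPENDENTS = {
    "storage-array-1": ["vm-db-01", "vm-db-02"],
    "vm-db-01": ["orders-db"],
    "vm-db-02": [],
    "orders-db": ["ecommerce-app"],
    "ecommerce-app": ["end-users"],
    "end-users": [],
}

def impact_of(component):
    """Return every component downstream of a change, via BFS."""
    seen, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# A single disk change reaches the VMs, the database, the app,
# and ultimately the end users.
print(sorted(impact_of("storage-array-1")))
```

Tracing impact this way is what makes the "storage change slows 60% of users" chain visible before users report it.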

17. ALTERED CODE

One of the top factors impacting website response time is altered code, which doesn't trigger traditional monitoring alarms, as those are usually based on existing or known thresholds.
Mike Paquette
VP of Security Products, Prelert
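Why does altered code slip past threshold-based alarms? A response time can double yet stay well under a static threshold, while a baseline-relative check flags it immediately. The sketch below is illustrative only, with made-up numbers; real anomaly detection is considerably more sophisticated.

```python
import statistics

# Historical response times (ms) under normal conditions.
baseline = [120, 118, 125, 122, 119, 121, 124, 120]
THRESHOLD_MS = 500  # a typical static alarm threshold

def static_alarm(sample_ms):
    """Fires only when a fixed threshold is crossed."""
    return sample_ms > THRESHOLD_MS

def baseline_alarm(sample_ms, history, sigmas=3):
    """Fires when a sample deviates far from the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(sample_ms - mean) > sigmas * stdev

# A code change doubles response time to ~250 ms: still far below
# the static threshold, but far outside the historical baseline.
sample = 250
print(static_alarm(sample))              # False: no traditional alarm
print(baseline_alarm(sample, baseline))  # True: baseline deviation caught
```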

18. DISTRIBUTED DENIAL OF SERVICE ATTACKS (DDoS)

In 2014, Quocirca researched the concerns European organizations have about the security and performance of their online domains and the actions taken to mitigate them. The survey found that, by a small margin, the biggest overall concern was denial of service attacks, which have become so widespread that they can affect just about any online resource. This is backed by other, non-Quocirca surveys showing that the number and scale of attacks have continually increased over the last few years. However, whilst it is the biggest attacks that hit the headlines, it is the huge number of smaller, largely unreported attacks that should be of most concern. These are launched as diversionary measures to mask other, more targeted attacks, or even as demonstrations. DDoS was followed by user endpoint issues, poor network performance, poor website server performance, and DNS performance, in that order.
Bob Tarzey
Analyst and Director, Quocirca

Download the free report from Quocirca.

19. INFORMATION ARCHITECTURE

The top factor that impacts website response time is the user’s experience, and that depends on information architecture. Simply responding quickly to an HTTP request is insufficient; that would be a technical answer to a business problem. The website exists, presumably, to provide information and to answer a user’s questions without human intervention, making information available 24x7. Is the information on the website structured so that users find what is appropriate to their need, instead of waiting and then getting unhelpful information? Information architecture structures the information available for each role and may group it by industry. In addition, good information architecture structures information in the best form to answer a question or, better yet, solve a problem. Putting the right information in front of the user without excessive navigation and false starts is the best way to improve response. The information provided should build in complexity as the user’s engagement continues. We should be timing how long it takes to get helpful information to the user, not just how long a request/response took.
Charley Rich
VP Product Management and Marketing, Nastel Technologies

20. THE IT TEAM

One factor that is often overlooked is finger pointing. Website response time can be impacted by a number of different factors, which can cause internal finger pointing as teams try to pinpoint the problem. It could be that the client’s network is slow; maybe there’s an issue with the WAN link out to the ISP; perhaps the firewall is slowing or denying traffic; you get the picture. Without an overarching performance monitoring platform keeping an eye on all of these disparate areas, that internal struggle can slow things down considerably.
Brian Promes
Director of Product Marketing, SevOne

BONUS: SERVICE CULTURE

In today's software-defined economy, where every business runs on apps, the top factor that impacts performance and response time is inattention to early warning signs such as increased load time for key pages, long-running database queries, or unresolved user complaints. Apps often provide clear indications via monitoring alerts when any of these occurs, but restoring and maintaining performance first requires a culture of service quality that associates uptime with customer value. Tools and metrics are useful, but only if people and processes are aligned to deliver exceptional user experiences. A lack of service culture often leads to early warning signs being ignored. One way to ensure that app teams value site performance is to make it easier for them to focus on solving the problems that most directly impact customer experience. Too many irrelevant alerts mean valuable resources spend time figuring out which problem to solve, or solving the wrong problem; either is inefficient and demotivating.
Dan Turchin
VP Product, BigPanda
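The alert-noise problem above can be illustrated with a simple impact-based triage filter: drop alerts with no customer impact and rank the rest, so teams see the checkout problem before the staging noise. This is an illustrative sketch, not any vendor's implementation; the alert fields and sources are hypothetical.

```python
# Hypothetical alert stream; "users_affected" stands in for whatever
# customer-impact signal an alerting pipeline attaches to each alert.
alerts = [
    {"id": "A1", "source": "checkout-page", "users_affected": 3200},
    {"id": "A2", "source": "staging-host",  "users_affected": 0},
    {"id": "A3", "source": "search-api",    "users_affected": 450},
    {"id": "A4", "source": "internal-cron", "users_affected": 0},
]

def triage(alerts, min_users=1):
    """Drop alerts with no customer impact; rank the rest by impact."""
    relevant = [a for a in alerts if a["users_affected"] >= min_users]
    return sorted(relevant, key=lambda a: a["users_affected"], reverse=True)

for alert in triage(alerts):
    print(alert["id"], alert["source"], alert["users_affected"])
# Checkout surfaces first, then search; zero-impact noise never appears.
```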

