APMdigest asked industry experts – from analysts and consultants to the top vendors – to outline the most important factors that impact website response time. The last installment of the list, featuring factors 16–20, presents various factors you may not have considered.
16. IT INFRASTRUCTURE CHANGES
As new servers are powered on, database configurations are changed, shared storage is reconfigured, and VMs are reallocated, along with a whole host of other everyday infrastructure changes, the IT admin rarely knows how the ripple effects of these changes may be impacting web response times and the company's bottom line. Consider this example: a storage (disk) change is made. The change slows a group of VMs. One of those VMs supports a database, so its queries slow. Say those queries support the e-commerce application servers; as a result, 60% of the users of this application experience slower responses. Or say an overzealous VM admin observes that certain hosts are underutilized and adds an additional application to them. Now, when an unexpected spike in user load occurs, there will be insufficient compute resources to cover it. Within each of these technologies, and the transitions between them, lies the potential for problems in end-user transactions.
VP & GM, Dell Performance Monitoring
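The cascade described above amounts to walking a dependency graph: a change to one low-level component touches everything that transitively depends on it. A minimal sketch, with all component names hypothetical:

```python
# Map each infrastructure component to the components that depend on it.
# All names are illustrative, not from any real environment.
DEPENDENTS = {
    "disk-array-1": ["vm-7", "vm-8"],
    "vm-7": ["orders-db"],
    "vm-8": [],
    "orders-db": ["ecommerce-app"],
    "ecommerce-app": [],
}

def impacted(component):
    """Return every component that transitively depends on `component`."""
    seen, stack = set(), [component]
    while stack:
        for dep in DEPENDENTS.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(impacted("disk-array-1")))
# A single storage change touches the VMs, the database they host,
# and ultimately the e-commerce application its users depend on.
```

This is why a seemingly local disk change can surface as slower end-user transactions several layers up the stack.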
17. ALTERED CODE
One of the top factors impacting website response time is altered code, which often doesn't trigger traditional monitoring alarms because those are usually based on existing, known thresholds.
VP of Security Products, Prelert
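A hedged sketch of why static thresholds miss this: a code change that adds a modest but consistent slowdown can stay well under a fixed alarm level while clearly deviating from the learned baseline. The numbers and the 500 ms threshold below are illustrative assumptions:

```python
import statistics

# Historical response times (ms) before the code change; illustrative data.
baseline = [110, 105, 112, 108, 111, 107, 109, 113, 106, 110]
STATIC_THRESHOLD_MS = 500  # assumed fixed alarm level

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def static_alarm(sample_ms):
    # Fires only when a hard limit is crossed.
    return sample_ms > STATIC_THRESHOLD_MS

def baseline_alarm(sample_ms, sigmas=3):
    # Fires when a sample deviates from the learned baseline.
    return abs(sample_ms - mean) > sigmas * stdev

after_change = 180  # altered code added ~70 ms, still far below 500 ms
print(static_alarm(after_change))    # False: the threshold alarm never fires
print(baseline_alarm(after_change))  # True: the baseline deviation is flagged
```

The regression is invisible to the threshold alarm but obvious against the baseline, which is the gap the quote describes.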
18. DISTRIBUTED DENIAL OF SERVICE ATTACKS (DDoS)
In 2014, Quocirca researched the concerns European organizations have about the security and performance of their online domains, and the actions taken to mitigate them. The survey found that, by a small margin, the biggest overall concern was denial of service attacks, which have become so widespread that they can affect just about any online resource. This is backed by other, non-Quocirca surveys showing that the number and scale of attacks has continually increased in the last few years. However, whilst it is the biggest attacks that hit the headlines, it is the huge number of smaller, largely unreported attacks that should be of most concern. These are launched as diversionary measures to mask other, more targeted attacks, or even as demos. DDoS was followed by user endpoint issues, poor network performance, poor website server performance, and DNS performance, in that order.
Analyst and Director, Quocirca
19. INFORMATION ARCHITECTURE
The top factor that impacts website response time is the user's experience, and that is dependent on information architecture. Simply responding fast to an HTTP request is insufficient; that would be a technical answer to a business problem. The website exists, presumably, to provide information and to answer a user's questions without human intervention, thus providing information availability 24x7. Is the information on the website structured so that users find the information appropriate to their need, instead of waiting and then getting unhelpful information? Information architecture structures the information available for each role, and may group it by industry. Good information architecture also structures information in the best form to answer a question or, better yet, solve a problem. Putting the right information in front of the user without excessive navigation and false starts is the best way to improve response. The information provided should build in complexity as the user's engagement continues. We should be timing how long it takes to get helpful information to the user, not just how long a request/response took.
VP Product Management and Marketing, Nastel Technologies
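The closing point, measuring time to helpful information rather than raw request/response time, can be sketched with a hypothetical session timeline (all event names and timings are invented for illustration):

```python
# A session is a list of (seconds_since_start, event) pairs. The metric of
# interest is how long the user takes to reach content that answers their
# question, not just how fast the first page renders. Data is hypothetical.
session = [
    (0.4, "home page rendered"),       # fast raw response time
    (8.0, "searched site"),
    (15.2, "opened wrong article"),    # a false start
    (31.5, "found pricing page"),      # the answer the user came for
]

def raw_response_time(events):
    # Time until the first page rendered.
    return events[0][0]

def time_to_helpful_info(events, goal="found pricing page"):
    # Time until the user reached the content that answered their question.
    return next(t for t, e in events if e == goal)

print(raw_response_time(session))     # 0.4 s: looks great on a dashboard
print(time_to_helpful_info(session))  # 31.5 s: the user's real experience
```

The gap between the two numbers is exactly the navigation and false starts that good information architecture removes.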
20. THE IT TEAM
One factor that is often overlooked is finger pointing. Website response time can be impacted by a number of different factors, which can cause internal finger pointing as teams try to pinpoint the problem. It could be that the client's network is slow; maybe there's an issue with the WAN link out to the ISP; perhaps the firewall is slowing or denying traffic; you get the picture. Without an overarching performance monitoring platform to keep an eye on all of these disparate areas, that internal struggle can slow things down considerably.
Director of Product Marketing, SevOne
In today's software-defined economy, where every business runs on apps, the top factor that impacts performance and response time is inattention to early warning signs such as increased load time for key pages, long-running database queries, or unresolved user complaints. Apps often provide clear indications via monitoring alerts when any of these occurs, but restoring and maintaining performance first requires a culture of service quality that associates uptime with customer value. Tools and metrics are useful, but only if people and process are aligned to deliver exceptional user experiences. A lack of service culture often leads to early warning signs being ignored. One way to ensure that app teams value site performance is to make it easier for them to focus on solving the problems that most directly impact customer experience. Too many irrelevant alerts mean valuable people spend time figuring out which problem to solve, or solving the wrong problem; either is inefficient and demotivating.
VP Product, Big Panda
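One common way to cut irrelevant alert noise, offered here as a generic sketch rather than any particular vendor's method, is to group raw alerts that describe the same underlying incident so responders see incidents instead of individual alarms. The alert data is invented for illustration:

```python
from collections import defaultdict

# Raw alerts as (host, check, message). In a noisy environment, many alerts
# describe one incident; grouping by (host, check) collapses the repeats.
alerts = [
    ("web-1", "latency", "p95 latency high"),
    ("web-1", "latency", "p95 latency high"),
    ("web-1", "latency", "p95 latency still high"),
    ("db-1", "disk", "disk 91% full"),
]

def group_alerts(raw):
    """Collapse raw alerts into incidents keyed by (host, check)."""
    incidents = defaultdict(list)
    for host, check, message in raw:
        incidents[(host, check)].append(message)
    return incidents

incidents = group_alerts(alerts)
print(len(alerts), "alerts ->", len(incidents), "incidents")
# 4 alerts -> 2 incidents
```

Even this simple grouping halves the triage surface; real correlation engines add time windows and topology, but the goal is the same: fewer, more relevant things to act on.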