At the Interop conference in April 2014, Riverbed conducted a short survey to determine whether, and how, application performance problems affect an organization’s business. A total of 210 respondents answered questions about the performance of business-critical applications, non-critical applications, and productivity applications.
We asked participants to consider their experiences at main offices, branch offices, and remote situations and evaluate how each of the following contributed to performance problems:
■ branch office infrastructure issues
■ insufficient bandwidth
■ poor application coding techniques
■ slow servers
■ too much latency in the network
We asked participants to indicate how far along they might be on projects to mitigate performance problems and to rate the effectiveness of several techniques including:
■ add more bandwidth
■ build a branch-converged infrastructure
■ distribute workloads geographically
■ deploy faster endpoints
■ deploy faster servers
■ implement application delivery controllers
■ implement performance monitoring
■ implement WAN optimization
■ rewrite applications
80% of respondents indicated that slow business-critical applications negatively affect business performance, and 71% said the same of slow access to productivity applications. The top three causes of performance problems were insufficient bandwidth, too much latency, and slow servers. From this, we can observe that modern business has come to rely on highly available, high-quality connectivity and on applications and data that behave as if they’re local. Individuals can no longer work in isolation, disconnected from their peers. Nor can they waste time waiting for the computer to “catch up.”
Turning to mitigation techniques, we can see a curious gap emerge. The three top-rated techniques were adding bandwidth at 70%, implementing WAN optimization at 67%, and distributing workloads geographically at 52%. In all cases, however, fewer respondents indicated that they were engaged in related projects. Only 50% have added bandwidth, only 42% have implemented WAN optimization, and only 28% have distributed workloads geographically.
It isn’t all that unusual, really, for action to lag awareness. It is interesting to consider the reasons why, though. Discovering the root causes of performance problems can be challenging at times. Users often blame only one aspect: “Hey, what’s wrong with the network? Why is it always soooo sloooow?” This is a common reaction even if all except one or two applications are performing acceptably. In reality, performance problems could exist anywhere in the technology stack — the network, the application, the database, or the “glue” layers holding everything together.
We recommend four simple yet critical steps to help avoid unnecessary slowness, to help keep applications performing at their peak, and to help maintain a consistent end-user experience.
1. Analyze, diagnose, and resolve performance problems first
Monitoring tools can identify chatty applications, slow servers, congested networks, and other sources of delay. An end-to-end view provides the most visibility. Monitoring entire transactions, rather than just particular points, can reveal the true causes of performance problems.
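The idea of transaction-level monitoring can be sketched in a few lines. This is an illustration only, not any specific product: it times each stage of a hypothetical transaction (the stage names and delays are made up) so the slowest link in the stack stands out, rather than sampling a single point.

```python
import time

def profile_transaction(stages):
    """Run each (name, fn) stage in order and record its wall-clock duration.

    Timing every stage end to end, instead of one point, shows which part
    of the stack is actually slow.
    """
    timings = {}
    for name, fn in stages:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

# Hypothetical transaction: a network hop, application logic, a database query.
stages = [
    ("network", lambda: time.sleep(0.01)),
    ("application", lambda: time.sleep(0.002)),
    ("database", lambda: time.sleep(0.05)),
]
timings = profile_transaction(stages)
slowest = max(timings, key=timings.get)
print(f"slowest stage: {slowest}")  # the simulated database, in this run
```

In a real deployment the stages would be instrumented inside the application and network path, but the principle is the same: per-stage timings turn “the network is slow” into a specific answer.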
2. Remember that electrons and photons have a speed limit
And that limit is 186,282 miles per second, and only under perfect conditions at that; in optical fiber, light travels at roughly two-thirds of that speed. Increasing the distance between users and data can negatively affect performance. It takes time for data to scoot across a city, let alone a continent.
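The arithmetic is worth doing once. The sketch below computes the theoretical minimum round-trip time over fiber, using the common approximation that light in fiber covers about 200,000 km per second (the distance figure is an approximation for illustration):

```python
# Back-of-the-envelope propagation delay. Light in fiber travels at
# roughly two-thirds of c, about 200,000 km/s, i.e. 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km):
    """Theoretical minimum round-trip time over fiber, ignoring routing
    detours, queuing, and processing delays (real RTTs are always higher)."""
    return 2 * distance_km / FIBER_KM_PER_MS

# New York to London is roughly 5,570 km as the cable flies.
print(round(min_rtt_ms(5570)), "ms")  # ~56 ms, best case
```

No amount of bandwidth changes this floor; only moving the data closer to the user does.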
3. Distribute workloads geographically when it makes good business sense
A “follow the sun” model can be a useful guide. Deploy application delivery controllers to increase availability and connect users to the data that’s closest. Take advantage of global load balancing capabilities to route requests as locally as possible, while also providing planet-wide resiliency against failures.
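The routing decision at the heart of global load balancing can be sketched simply. This is an illustrative model, not a real product API: region names and RTT figures are invented, and the selector picks the lowest-latency healthy region, falling back automatically when a region is down.

```python
# Latency-based global load balancing, sketched: send each request to the
# closest healthy region; if that region fails, the next-closest takes over.

def pick_region(rtts_ms, healthy):
    """Return the healthy region with the lowest measured round-trip time."""
    candidates = {region: rtt for region, rtt in rtts_ms.items() if region in healthy}
    if not candidates:
        raise RuntimeError("no healthy regions available")
    return min(candidates, key=candidates.get)

# Hypothetical RTTs as measured from one user's vantage point.
rtts = {"us-east": 12, "eu-west": 88, "ap-south": 240}

print(pick_region(rtts, healthy={"us-east", "eu-west", "ap-south"}))  # us-east
print(pick_region(rtts, healthy={"eu-west", "ap-south"}))             # eu-west, on failover
```

Real application delivery controllers add health checks, DNS or anycast steering, and session affinity on top, but the core trade is the same: serve locally when you can, fail over globally when you must.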
4. Address latency, the primary cause of poor WAN performance
Deploy WAN optimizers to reduce the amount of data traversing the WAN and to reduce the number of connections between clients and servers. These techniques minimize the effects of latency, and — almost — make it seem as if data moves faster than light.
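A toy illustration of the data-reduction half of that idea, using plain compression on a deliberately redundant payload (real WAN optimizers also deduplicate byte patterns across flows and sessions, which this sketch does not attempt):

```python
import zlib

# Application traffic is often highly repetitive, so far fewer bytes need
# to cross the WAN than the application actually produced.
payload = b"GET /app/data HTTP/1.1\r\nHost: example.internal\r\n\r\n" * 1000
compressed = zlib.compress(payload)

# Transfer time on a hypothetical 1 Mbps link (125,000 bytes/second).
link_bytes_per_sec = 125_000
before_s = len(payload) / link_bytes_per_sec
after_s = len(compressed) / link_bytes_per_sec

print(f"{len(payload)} bytes -> {len(compressed)} bytes on the wire")
```

Fewer bytes and fewer round trips mean the fixed latency floor is paid less often, which is why optimization can feel like the data outran the photons.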
Steve Riley is Deputy CTO for Riverbed Technology.