Cloud computing represents a compelling way for IT teams to achieve superior agility, flexibility and cost-efficiency in delivering both customer- and employee-facing enterprise applications. But using cloud services from one of the top service providers is no guarantee of superior application performance, particularly when it comes to speed. Businesses must look beyond cloud deployment benefits and evaluate how moving web applications to the cloud may impact their end users' experiences.
Web application speed is a business issue, and applications that don't perform well (slow to load, intermittently unavailable, or inconsistent) can negatively impact end users' experiences. Consider potential customers: when their satisfaction with your application is low, they are less likely to keep spending time on your site or to follow through with a purchase.
A recent study analyzing millions of page views on websites around the world found that conversion rates increase 74 percent when page load time improves from eight to two seconds. Another study found that page abandonment rates increase steeply as page load times increase.
With statistics like these, you can't afford to simply turn over your mission-critical applications to the cloud and not take steps on your own to validate and ensure strong application performance. Today, most cloud services offer generic guarantees such as 99.95 percent uptime, but all this means is that their services are up and running, not that your application is performing optimally and delivering the performance that your end users expect.
Many service providers will issue service credits for blatant performance violations, but can these credits make up for the potential damage caused to your revenue, brand and customer satisfaction? Contrary to popular belief, cloud elasticity is not without limits, and if your "neighbor" in the cloud experiences a spike in traffic, there's a chance your application may slow down significantly.
Cloud service providers should provide application performance guarantees tailored to individual customers' needs and provide proactive SLA notifications, but the reality is that many do not. It's therefore incumbent upon cloud users to measure the performance of their cloud-based applications on their own, from the only perspective that matters: that of their end users, on the other side of the cloud at the edge of the Internet.
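To make this concrete, the sketch below shows one simple way to take such an outside-in measurement yourself: a synthetic check that fetches a page the way an end user's browser would and compares the response time against a target. The URL and threshold are placeholders, and a real check would run on a schedule from locations near your end users rather than from inside the provider's data center.

```typescript
// Minimal synthetic check: fetch a page as an end user would and time it.
// TARGET_URL and SLA_THRESHOLD_MS are hypothetical placeholders.
const TARGET_URL = "https://www.example.com/";
const SLA_THRESHOLD_MS = 2000; // example end-user response-time target

async function runCheck(): Promise<void> {
  const start = performance.now();
  try {
    const response = await fetch(TARGET_URL, { redirect: "follow" });
    await response.text(); // include body download in the measurement
    const elapsed = performance.now() - start;
    const status = elapsed <= SLA_THRESHOLD_MS ? "OK" : "SLOW";
    console.log(`${new Date().toISOString()} ${status} ${response.status} ${elapsed.toFixed(0)} ms`);
  } catch (err) {
    console.log(`${new Date().toISOString()} DOWN ${(err as Error).message}`);
  }
}

runCheck();
```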
Likewise, if fast ramp-up of additional capacity during peak business demand is fundamental to your cloud goals, it should be proactively tested. This is the only way to know for sure that performance is not slipping and that you're getting what you're paying for. You should also insist that specific application performance guarantees be written into your SLA.
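If elastic ramp-up matters to you, a rough probe like the one below, which sends progressively larger bursts of concurrent requests and watches how latency responds, can hint at whether added load is actually being absorbed. The URL and step sizes are illustrative only; a production-grade load test would use a dedicated tool and realistic user journeys.

```typescript
// Rough ramp-up probe: increasing bursts of concurrent requests,
// reporting median latency at each step. All values are illustrative.
const TARGET_URL = "https://www.example.com/"; // hypothetical application URL

async function timedRequest(): Promise<number> {
  const start = performance.now();
  const res = await fetch(TARGET_URL);
  await res.text();
  return performance.now() - start;
}

async function rampUp(): Promise<void> {
  for (const concurrency of [5, 10, 20, 40]) {
    const timings = await Promise.all(
      Array.from({ length: concurrency }, () => timedRequest())
    );
    timings.sort((a, b) => a - b);
    const median = timings[Math.floor(timings.length / 2)];
    console.log(`concurrency=${concurrency} median=${median.toFixed(0)} ms`);
  }
}

rampUp();
```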
Cloud-based application performance can vary greatly depending on an end user’s location. Typically, the closer an end user is to a cloud service provider data center, the better the performance. So you must be extremely watchful of the end-user experience across key geographies, at critical times of day. Worldwide monitoring and testing networks can give you a quick and easy bird’s eye view into the actual experience of end-user segments across various regions.
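A minimal sketch of that kind of regional roll-up appears below. The sample data is invented; in practice the measurements would come from probes or real-user monitoring at the edge, tagged with each end user's region.

```typescript
// Roll up response-time samples by region to spot geographic slow spots.
// The samples are made up for illustration.
interface Sample {
  region: string;
  loadTimeMs: number;
}

function percentile(sorted: number[], p: number): number {
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

function summarizeByRegion(samples: Sample[]): void {
  const byRegion = new Map<string, number[]>();
  for (const s of samples) {
    const list = byRegion.get(s.region) ?? [];
    list.push(s.loadTimeMs);
    byRegion.set(s.region, list);
  }
  for (const [region, times] of byRegion) {
    times.sort((a, b) => a - b);
    console.log(
      `${region}: p50=${percentile(times, 50)} ms, p95=${percentile(times, 95)} ms, n=${times.length}`
    );
  }
}

// Illustrative samples only.
summarizeByRegion([
  { region: "us-east", loadTimeMs: 1100 },
  { region: "us-east", loadTimeMs: 1350 },
  { region: "eu-west", loadTimeMs: 2300 },
  { region: "eu-west", loadTimeMs: 2750 },
  { region: "apac", loadTimeMs: 3900 },
]);
```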
Furthermore, new online communities measure and monitor the performance of the leading cloud service providers, helping you understand if an application problem is unique to you, or symptomatic of a larger cloud-related issue that may be affecting the wider Internet ecosystem.
In fairness to cloud service providers, it can be challenging to guarantee the performance of an application from an end user's perspective, because that performance depends on a number of factors completely outside their control: regional ISPs, local ISPs, third-party content and services, CDNs, and all the way out to end users' browsers and devices. This is known as the application delivery chain, and a single poorly performing element, be it the cloud or another variable, can drag down performance for an entire application. Managing application performance across this delivery chain begins by understanding the end-user experience at the browser/device level, and then extending all the way back to the data center to identify and address any "offending" elements along the way.
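At the browser end of that chain, the standard Navigation Timing API exposes the raw material for this kind of breakdown. The sketch below, intended to run in the browser (for example via a real-user-monitoring snippet), splits a single page load into delivery-chain stages such as DNS lookup, TCP connect, time to first byte, and DOM processing.

```typescript
// Break a page load into delivery-chain stages using the browser's
// Navigation Timing API (PerformanceNavigationTiming).
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  const stages = {
    dnsLookupMs: nav.domainLookupEnd - nav.domainLookupStart,
    tcpConnectMs: nav.connectEnd - nav.connectStart,
    timeToFirstByteMs: nav.responseStart - nav.requestStart,
    contentDownloadMs: nav.responseEnd - nav.responseStart,
    domProcessingMs: nav.domContentLoadedEventEnd - nav.responseEnd,
    totalLoadMs: nav.loadEventEnd - nav.startTime,
  };
  console.log("Delivery-chain breakdown:", stages);
}
```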
As more applications and application components are ported to shared and opaque cloud platforms, it becomes essential to include the cloud as part of this comprehensive view to reap its benefits.
Steve Tack is CTO of Compuware’s Application Performance Management Business Unit.