Whether you are consuming Cloud-based applications, in the middle of a Cloud deployment, hosting your own applications in the Cloud or offering publicly accessible applications via the Internet, Cloud computing has transformed the way web content is delivered to your customers and internal users. Application components are now delivered from your datacenter, from private Clouds and from third-party services hosted in the public Cloud, outside your control zone.
Unfortunately, your customers will hold IT ultimately responsible for their user experience, regardless of where problems occur. Without the capability to manage the performance lifecycle between the Cloud – whether public, private or hybrid – and the consumers of business-critical applications and services, it is not possible to understand (let alone guarantee) necessary application service levels from the user’s perspective. This presents many challenges. Let’s review them in more detail.
1. End User Expectations are Higher than Ever Before
Today, end user satisfaction is everything. Your application must perform, or else your customers will either flock to your competition or saturate your internal IT helpdesk with open issue requests.
Remember, loyalty does not exist on the Internet. The average online shopper expects your web pages to load in two seconds or less; after three seconds, up to 40% will abandon your site and go somewhere else.
And it’s not only e-retailers who should be concerned about web page load times. Searching for product or service information is the second most popular online activity, after e-mail and instant messaging. Your website is therefore a key source of leads and of prospects looking for more information. If it doesn’t perform optimally, prospects will turn to your competitors and you could lose revenue.
Similarly, expectations among your internal users have also risen. Therefore, mission-critical internal applications like corporate email, VoIP, Salesforce or EHR apps must perform as close as possible to this new two-second acceptance threshold as well.
One of the best ways to ensure a quality user experience is to detect problems before your end users are even aware of them. Put a plan in place to proactively monitor your most critical web transactions (e.g. shopping carts, CRM record retrieval) on a 24x7 basis, using the same network circuits and from the same remote sites and locations that your end users do. That way, you can identify and resolve problems before the first call to your helpdesk happens.
Besides overseeing end-to-end transaction response time, pay attention to the performance of each object within a page to ensure that images, videos or third-party objects embedded in your web apps (e.g. Google Analytics, Marketo, Facebook or Twitter feeds) are not slowing down your site and negatively impacting user experience.
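The per-transaction checks described above can be sketched as a minimal synthetic probe. The thresholds, function names and workflow below are illustrative assumptions for the sketch, not part of any particular monitoring product:

```python
import time
import urllib.request

# Thresholds taken from the user-experience figures cited earlier (assumed values).
SLOW_THRESHOLD_S = 2.0      # the two-second acceptance threshold
ABANDON_THRESHOLD_S = 3.0   # point at which up to 40% of shoppers leave

def classify_load_time(seconds):
    """Bucket a measured page load time against user-experience thresholds."""
    if seconds <= SLOW_THRESHOLD_S:
        return "ok"
    if seconds <= ABANDON_THRESHOLD_S:
        return "slow"
    return "critical"

def timed_fetch(url, timeout=10):
    """Fetch a URL and return (elapsed_seconds, HTTP status).

    A real monitor would run this 24x7 from the same remote sites your
    users occupy, and would also time each embedded object (images,
    videos, third-party tags) individually.
    """
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # time the full body, not just the headers
        status = resp.status
    return time.monotonic() - start, status
```

A scheduled job would call `timed_fetch` against each critical transaction URL and alert whenever `classify_load_time` returns anything other than "ok".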
2. The Cloud = Remote Applications, Distributed Locations
Any business application running “in the Cloud” is, by definition, a remote application. Thus, every office becomes a remote office, and every consumer of Cloud services becomes a remote user. Cloud services are further tied to network performance because of the nature of TCP/IP, which favors reliability over efficiency. Even the most thoroughly modern Software-as-a-Service (SaaS) applications quickly begin to fail, crash and disconnect users when network performance falls below tolerable thresholds. And remember, applications not originally designed for remote access are more susceptible to network performance fluctuations.
Therefore, you should continuously oversee key network performance indicators – capacity, utilization, data loss, jitter, route analysis and Quality of Service (QoS) configuration – across all network paths that deliver your key applications and services. You should also monitor site-to-site connectivity, connectivity with remote locations and your third-party service providers’ SLAs.
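As one illustration, the sketch below derives latency, jitter and loss figures from a list of probe round-trip samples. The sampling approach and the jitter definition (mean absolute difference between consecutive samples, in the spirit of RFC 3550) are assumptions made for the example:

```python
def network_kpis(rtts_ms):
    """Compute basic latency, jitter and loss KPIs from probe samples.

    rtts_ms: round-trip times in milliseconds; None marks a lost probe.
    """
    received = [r for r in rtts_ms if r is not None]
    loss = 1 - len(received) / len(rtts_ms) if rtts_ms else 0.0
    latency = sum(received) / len(received) if received else None
    # Jitter as the mean absolute difference of consecutive samples.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return {"latency_ms": latency, "jitter_ms": jitter, "loss": loss}
```

Running such a calculation per network path, and per QoS class, gives the continuous baseline against which degradations stand out.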
3. The Rise of Global Cloud IP Traffic Could be Straining Your Network
Consumer and business traffic flowing through your datacenters can be categorized into three main areas:
- Traffic that remains within the data center
- Traffic that flows from data center to data center
- Traffic that flows from the data center to end users through the Internet or IP WAN
Network bandwidth, latency, packet loss, QoS and jitter can all fluctuate significantly based on the volume and type of traffic on your network. An increased volume of Cloud IP traffic (either employee recreational, employee business-related or external customers accessing your public apps) can negatively impact network performance and user experience.
- Employee recreational Internet traffic puts a strain on your network and impacts mission-critical application performance.
- You may not have enough network capacity to handle new voice/video application load due to the proliferation of personal smartphones, laptops, eBook readers and other mobile devices on corporate networks streaming audio and video.
- Business growth, while clearly a positive trend, can be risky business if you haven’t done your planning homework and properly assessed your network’s ability to support new application traffic demands prior to a major launch.
- Bidirectional Cloud IP traffic (between your datacenter and the Internet) can significantly worsen an oversubscribed WAN link bottleneck.
Therefore, every time you are planning changes in your environment (e.g. private or public Cloud initiatives, new application roll-outs, new marketing campaigns driving additional visitors to your website) you should run a thorough network assessment to baseline the overall health of your network and ensure there is enough capacity to handle the additional application demand. Otherwise you will be exposing your organization to major risk.
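A capacity baseline of this kind can be sketched as a simple headroom calculation. The 70% target utilization below is an assumed planning rule of thumb, not a figure from the text:

```python
def capacity_headroom(link_mbps, current_mbps, projected_new_mbps,
                      target_util=0.7):
    """Check whether a link can absorb projected new traffic while
    staying under a target utilization (an assumed 70% planning ceiling,
    leaving room for bursts)."""
    budget = link_mbps * target_util
    total = current_mbps + projected_new_mbps
    return {"headroom_mbps": budget - total, "ok": total <= budget}
```

For example, a 100 Mbps link carrying 50 Mbps today can absorb a projected 10 Mbps of new application traffic, but the same link at 65 Mbps cannot without breaching the ceiling.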
4. Finding the Root Cause of Problems in a New Cloud-Based Application Stack is Only Possible with a Holistic Performance Management Approach
In Cloud-based architectures, web application content delivered to your users traverses a complex path of interrelated entities inside and outside of your control zone – datacenters, private and public Clouds, remote sites and locations, third-party services and components – any of which are potential bottlenecks or failure points.
Problems can happen anywhere: in the source code of an application; in a specific method’s call to a back-end database; in a third-party hosted application or service; in a saturated network; in a datacenter-to-datacenter intermittent connection or in an oversubscribed WAN link.
The only way to quickly identify the root cause of a performance problem impacting end user experience is by holistically managing application performance end-to-end, starting from your users, and covering the entire Cloud-based application stack - network performance, cross-datacenter and remote sites connectivity, third-party SLAs and application traffic - down to network packets and application code.
5. Web Performance Optimization is Vital for your Business Success
The reality is that whether or not an application is fast depends on your end user’s perception. As a Cloud-services consumer, or as a Cloud services provider, you have to continue to look ahead for new ways to optimize application performance as user expectations for fast application delivery continue to rise.
Your competitors are working hard to optimize their web performance as a way to stay ahead in the race, and so should you. Companies that do not embrace Cloud delivery models and application performance optimization initiatives together as an ongoing “stay ahead of your competition” strategy will find it increasingly difficult to compete in a virtualized, Cloud-based world. Your approach should be built on a continuous four-stage cycle – assess, monitor, optimize and validate – involving all stakeholders in your organization.
In addition, web performance optimization plans need to be based on performance metrics gathered across the entire Cloud-based application stack – datacenters, private and public Cloud, remote locations and users – using three techniques:
Network Capacity and Performance: Underperforming networks can significantly impact the performance of your mission-critical applications (e.g. VoIP, video conferencing, VDI, websites, shopping carts, Salesforce). Therefore, a web performance optimization plan needs to analyze network capacity and utilization, data loss, jitter, route analysis and Quality of Service (QoS) across all network paths that deliver your key applications and services.
Network Usage and Application Traffic Analysis: Every application comes with bandwidth requirements. When working on a performance optimization plan, you need to know the requirements and behavior of the other applications that already depend on your network. Network traffic monitoring capabilities will give you the insight needed to ensure proper bandwidth allocation across your applications, readjust your QoS and network policies as needed and assure optimal application performance.
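One concrete form this insight can take is a comparison of observed per-application traffic against each application's allocated share. The function and data shapes below are illustrative assumptions for the sketch:

```python
def check_allocations(observed_mbps, allocations_mbps):
    """Return each application exceeding its allocated bandwidth,
    mapped to the size of the overshoot in Mbps.

    observed_mbps:    {app_name: measured traffic in Mbps}
    allocations_mbps: {app_name: allocated share in Mbps}
    Apps with no allocation are treated as allocated 0 Mbps.
    """
    return {app: used - allocations_mbps.get(app, 0.0)
            for app, used in observed_mbps.items()
            if used > allocations_mbps.get(app, 0.0)}
```

The overshoot report is exactly the input you need when readjusting QoS classes and network policies.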
Real-Time Application Tracing: Web performance optimization initiatives need to analyze real end users’ transactions and performance data across layers and hosts. This way, you can identify the root cause of an application performance bottleneck (e.g. a failing SQL query, an overloaded server or a poorly performing method) and fix it.
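A minimal sketch of this kind of tracing, assuming a simple in-process span recorder rather than any particular APM product:

```python
import time
from contextlib import contextmanager

TRACE = []  # collected (step_name, elapsed_ms) spans for one transaction

@contextmanager
def span(name):
    """Record how long a named step of a transaction takes."""
    start = time.monotonic()
    try:
        yield
    finally:
        TRACE.append((name, (time.monotonic() - start) * 1000))

def slowest_span(trace):
    """Return the step that dominates the transaction's response time."""
    return max(trace, key=lambda s: s[1])
```

Wrapping each layer of a transaction (`with span("sql_query"): ...`, `with span("render"): ...`) and inspecting `slowest_span(TRACE)` points directly at the bottleneck to fix.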
ABOUT Jim Melvin
Jim Melvin is President and CEO at AppNeta, a leading SaaS provider of performance management solutions for business critical applications. Directly prior to AppNeta, Melvin was VP of Global Marketing at RSA, the Security Division of EMC. He also acted as EVP of Marketing and Business Development at Network Intelligence (acquired by EMC); President and CEO at Mazu Networks; and had a leadership role at Cisco after the acquisition of SightPath in 2000. Melvin has been awarded three US patents in fault tolerant systems design and holds a B.S. in computer engineering and an M.S. in management from Worcester Polytechnic Institute.