Why Cloud Consumers Need “Objective” Application Performance Management
July 12, 2013

Jim Young
IBM


The long-anticipated rise of cloud computing is finally taking hold, with analysts reporting more investment in public clouds than private clouds and suggesting that half of all production applications will be running on public clouds within three or four years.

The allure of public clouds springs from advantages like improved service scalability, reduced operational costs, and an increased focus on business goals and strategies rather than the technology needed to pursue them. That flexibility and economy come at a cost, however: reduced visibility into application and infrastructure health. Without direct control over the cloud infrastructure itself, traditional application performance management (APM) tools may prove impractical to deploy and manage.

I recently read a story about a war of words between a leading platform as a service vendor and a disgruntled customer, who discovered that they weren’t actually getting the amount of virtual computing capacity that they had been told they were getting.

Putting aside the customer’s justifiable indignation at not receiving the resources they believed they were paying for, the real story for a cloud consumer (or an APM product manager) is that the tools they were using to monitor their workloads didn’t provide the complete picture. Then, when the continuing mystery warranted a deeper-dive tool, it appears they were pressured or influenced into purchasing a particular cloud APM tool because of a relationship between that tool vendor and the PaaS provider.
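That capacity dispute illustrates why independent measurement matters. As a minimal sketch (not the tooling from the story), a tenant can time the same fixed CPU-bound workload on two supposedly identical instances; a large gap in elapsed time is a hint that the advertised capacity isn't being delivered:

```python
# Illustrative sketch: time a fixed CPU-bound workload so the same script,
# run on two "identical" cloud instances, can reveal capacity differences.
import time


def cpu_benchmark(iterations=2_000_000):
    """Return seconds taken to run a fixed integer workload."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = cpu_benchmark()
    print(f"Fixed workload took {elapsed:.3f}s")
```

A single run proves little, of course; repeated samples across instances and times of day are what expose a persistent shortfall.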

This suggests (and logic supports) that customers are better off using objective APM tools when monitoring workloads on public clouds, whether those workloads are running on a Platform as a Service (PaaS) solution like Heroku, or an Infrastructure as a Service (IaaS) solution like Amazon or Rackspace.

We generally espouse this practice to help customers maintain a posture of portability: they can nimbly move workloads among cloud platforms while keeping a continuous real-time and historical view of application health, without retraining their eyes on a new dashboard every time a workload moves. There is also the slightly suspicious-sounding argument that a customer should not rely solely on the service provider for monitoring tools, since that provider has a vested interest in painting a rosy picture. Even in the presence of SLAs, a cloud tenant with no access to the infrastructure is largely at the provider's mercy for performance reporting. An APM solution that the customer can deploy and configure independently provides a level of “checks and balances” oversight.
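A minimal example of that tenant-controlled oversight, assuming a placeholder URL standing in for the application's public endpoint, is an external probe that records availability and response time independently of any provider dashboard:

```python
# Hedged sketch of an independent availability/latency probe the tenant
# controls. The URL is a placeholder; in practice the probe would run from
# outside the provider's infrastructure and log results over time.
import time
import urllib.request


def probe(url, timeout=5):
    """Return (status_code, seconds) for one HTTP GET, or (None, None) on failure."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, time.perf_counter() - start
    except OSError:
        return None, None


if __name__ == "__main__":
    status, seconds = probe("https://example.com/")
    print(f"status={status} latency={seconds}")
```

Numbers gathered this way give the tenant their own evidence to weigh against an SLA report, rather than taking the provider's word for it.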

It can be impractical for customers to deploy legacy monitoring tools when moving to public clouds, so there is a need for a solution that can be deployed within those clouds, in the customer's own small sphere of control where their application VMs reside. By adopting an elastic and scalable yet small and easy-to-deploy architecture, along with the ability to embed additional monitoring technology into base VM images, such a solution enables robust APM even when users can deploy nothing more than simple Linux VMs to someone else's cloud.
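On a plain Linux VM, an embedded agent of that kind can start as little more than a script reading the standard /proc interface. This is an illustrative sketch only; shipping the JSON to a central collector is omitted:

```python
# Minimal sketch of a self-deployed metrics sampler for a plain Linux VM,
# assuming only standard OS interfaces (no provider APIs) are available.
import json
import os
import time


def sample_metrics():
    """Gather basic host metrics: load averages plus memory from /proc/meminfo."""
    load1, load5, load15 = os.getloadavg()
    metrics = {
        "timestamp": time.time(),
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
    }
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(("MemTotal:", "MemAvailable:")):
                    key, value = line.split(":", 1)
                    metrics[key] = int(value.split()[0])  # value in kB
    except FileNotFoundError:
        pass  # not on Linux; load averages are still reported
    return metrics


if __name__ == "__main__":
    print(json.dumps(sample_metrics(), indent=2))
```

Because the script needs nothing beyond the OS itself, it can be baked into a base VM image and run wherever the workload lands, which is exactly the portability the approach is after.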

Jim Young is Information Development Manager, IBM Cloud and Smarter Infrastructure

Related Links:

www.ibm.com

