Are Your Monitoring Systems Ready for the Cloud?
February 04, 2013
Gary Read

In a recent survey of 300 application developers, conducted by Boundary, we found that nearly 60 percent of participants had been affected by a Cloud outage. Around 72 percent of participants experienced significant costs from Cloud performance issues: thousands of dollars per incident and/or in excess of $100 per minute of downtime. This isn't stopping companies from moving to the Cloud, of course. The same survey found that 67 percent of developers say their company is hosting “business-impacting” applications in the public Cloud.

Other surveys show similar concerns about performance in the Cloud. The Cisco 2012 global Cloud computing survey indicated that Cloud application performance was one of the top three challenges companies face in migrating applications to the Cloud, after availability/reliability and device security.

It's easy to point the finger at the hosting companies. They're managing the infrastructure, so ultimately they must be responsible for performance, right? Not so fast.

Running services on the Internet is never failure-proof. Whether due to weather, natural disasters, equipment failure or operator error, outages will occur. Large IaaS vendors, such as Google, Rackspace and Savvis, are operating highly interdependent, complex services built on dozens of data centers, broadband connections and thousands of servers around the world; 100 percent uptime is simply not possible. Moreover, third-party providers can't see into your environment; they don't know what contingencies are playing out on your own network, in the third-party APIs and services you use, or in the code you wrote.
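A provider can't observe the third-party APIs your application depends on or the code paths that exist only on your side, so a simple synthetic check run from your own environment helps fill that gap. Below is a minimal sketch in Python; the dependency URL and latency budget are hypothetical placeholders, not values from any particular service.

```python
# Minimal sketch: a synthetic check for an external dependency that your
# provider cannot see from its side. URL and thresholds are hypothetical.
import time
import urllib.error
import urllib.request

DEPENDENCY_URL = "https://api.example-partner.com/health"  # hypothetical third-party API
LATENCY_BUDGET_SECONDS = 2.0                               # assumed acceptable response time

def check_dependency(url: str) -> None:
    """Measure one request to an external dependency and print a status line."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            elapsed = time.monotonic() - start
            if elapsed > LATENCY_BUDGET_SECONDS:
                print(f"WARN: {url} answered HTTP {response.status} in {elapsed:.2f}s "
                      f"(budget {LATENCY_BUDGET_SECONDS}s)")
            else:
                print(f"OK: {url} answered HTTP {response.status} in {elapsed:.2f}s")
    except urllib.error.HTTPError as exc:
        # The endpoint answered, but with an error status.
        print(f"ALERT: {url} returned HTTP {exc.code}")
    except OSError as exc:
        # Covers DNS failures, refused connections and timeouts.
        print(f"ALERT: {url} unreachable: {exc}")

if __name__ == "__main__":
    check_dependency(DEPENDENCY_URL)
```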

It's up to companies to fill in the gaps where their hosting partners will inevitably fail. And doing so requires a different type of monitoring capability than in years past. Industry luminary Michael Biddick recently wrote about the need for a new generation of APM tools that can effectively monitor all components of the application and supporting infrastructure, including system and network performance. Next-generation APM systems must locate the underlying component causing the problem, he writes. Finally, APM systems, working alone or with complementary products, must suggest or take corrective action to resolve performance issues before they affect users.

This is sound advice. It's rare that one solution can accomplish all of these tasks. Most companies, including many of our customers, rely on multiple monitoring tools that work together and share information to identify and resolve issues quickly.
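As one illustration of tools sharing information, a custom check can forward what it finds to another system through a generic webhook, so alerts from different layers of the stack land in one place. This is only a sketch: the webhook URL and payload fields are assumptions, not any specific vendor's API.

```python
# Minimal sketch of tools sharing information: a custom check forwards its
# result to another monitoring system's generic webhook endpoint.
import json
import urllib.request

WEBHOOK_URL = "https://monitoring.example.com/api/events"  # hypothetical endpoint

def forward_event(source: str, component: str, severity: str, message: str) -> None:
    """Send a normalized event to the receiving system as JSON."""
    payload = {
        "source": source,        # which tool detected the problem
        "component": component,  # which part of the stack it points to
        "severity": severity,
        "message": message,
    }
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        print(f"Forwarded event, webhook answered HTTP {response.status}")

if __name__ == "__main__":
    forward_event("network-monitor", "edge-router-3", "critical", "Packet loss above 5%")
```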

Importantly, these tools must provide visibility across Cloud and hybrid Cloud environments. This dynamic, virtual infrastructure has proven difficult or even impossible to manage with legacy APM systems designed for physical infrastructure.

As a result, systems, application and network groups often point fingers at one another and waste time, while still failing to identify which component is causing the issue.

If your company has invested a lot of money and time in a legacy APM product, you may be loath to replace it. That's a valid consideration. It's worth talking to your vendor to determine how they can support your move to the Cloud. Will an update be coming soon to address Cloud monitoring? If not, can their product easily work with newer tools to bridge the gap? But in general, new architectures demand new solutions.

Another trend is that APM tools are now offered as a service, just like the applications they monitor. This reduces the burden on IT of supporting yet another piece of software or appliance, and enables organizations to get up and running quickly on new monitoring systems as needed.

We are seeing a huge resurgence and growth in the APM market, prompting a number of analysts to publish in-depth studies on market segmentation and needs. Companies want to monitor their IT infrastructures from an “application first” or top-down perspective, which is relegating traditional bottom-up tools to legacy status. Something everyone appears to agree on is that application monitoring is not a one-size-fits-all proposition, and customers should understand their requirements fully before selecting their partners. The good news is that with tools offered as SaaS and on shorter-term subscription contracts, the cost of adoption and change has dropped dramatically.

Modern applications and Cloud computing are driving huge growth in this new generation of solutions while traditional/legacy solutions wither away. A clear distinction is also emerging between developer-focused solutions and operations-focused solutions, as follows:

Developer-focused solutions answer the question: “Where in the code is my problem area?” If the problem is not in the code, then of course these tools offer limited help.

Operations-focused solutions answer the question: “Where is my problem?” These tools must cover 100 percent of your environment but don't go as deep into code analysis.
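A minimal sketch of that distinction, using a placeholder function and hypothetical internal hosts: the decorator times an individual code path (the developer's “where in the code?”), while the sweep checks reachability across the environment (the operator's “where is my problem?”).

```python
# Minimal sketch contrasting the two focuses. The function, hosts and ports
# below are hypothetical placeholders, not a real deployment.
import socket
import time
from functools import wraps

# Developer-focused: instrument a code path to answer "where in the code?"
def timed(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.3f}s")
        return result
    return wrapper

@timed
def render_checkout_page():
    time.sleep(0.2)  # stand-in for real application work

# Operations-focused: sweep the environment to answer "where is my problem?"
def sweep(hosts_and_ports):
    for host, port in hosts_and_ports:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"OK: {host}:{port} reachable")
        except OSError as exc:
            print(f"ALERT: {host}:{port} unreachable: {exc}")

if __name__ == "__main__":
    render_checkout_page()
    sweep([("db.internal.example.com", 5432), ("cache.internal.example.com", 6379)])
```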

It’s a transitional time for the APM technology market. Perhaps more than ever before, companies are realizing that to succeed in the massive shift of placing IT services in the Cloud, an investment in comprehensive, always-on monitoring tools is a must. Otherwise, the Cloud can backfire. Users and managers will not quickly forget if their apps crash or sensitive data is lost forever. Selecting a next-gen APM tool today, one designed for monitoring modern, distributed Web apps and services, will help a company prepare for the transition to the new enterprise computing environment that is underway right now.

ABOUT Gary Read

Gary Read, CEO and President of Boundary, previously served as CEO of Nimsoft, provider of the award-winning Cloud monitoring solution, where he grew the business from zero to over $100 million in bookings and 300 people. As CEO, Gary led all aspects of the company including product, marketing, sales, support, and finance, guiding Nimsoft to a successful acquisition by CA for $350 million. Nimsoft experienced significant worldwide growth, with approximately 1,000 customers in 36 countries. Prior to Nimsoft, Gary held executive positions at BMC Software, Riversoft, and Boole and Babbage.

