Beyond Performance Monitoring
August 09, 2010
Russell Rothstein

It is no longer good enough to know that application performance is degrading - enterprises also need to be able to make the infrastructure and application adjustments necessary to avoid performance issues in the first place.

It's certainly obvious that enterprises have embraced virtualization as a way to consolidate and bring more agility to IT infrastructures. And over the next few years, more enterprises will move to cloud computing for many of the same reasons. What's not so obvious is that successful management of application performance is growing more difficult. In fact, because of all of the transaction interdependencies across the infrastructure – whether virtualized, physical and on-premise, or cloud – understanding the actual quality of application performance, from an end user's perspective, is more challenging than ever before.

The Shift from Performance Monitoring to Performance Management

Consider the typical contemporary infrastructure that a transaction may traverse: There is the client; firewalls, load balancers, web servers and application servers; external web service producers; gateway servers, grid servers, message buses, message brokers, ESB servers and perhaps a mainframe; databases, the network layer, and all of the associated equipment; as well as vast storage networks. Also, more often than ever, transactions now depend on third-party Web services providers or cloud services. And all of this infrastructure may be attempting to serve a request issued from a PC, tablet, smartphone or a Web service consumer. The takeaway? When service levels degrade, the resolution can't come fast enough. And there's simply no time to manually track down what aspect of the infrastructure is the cause of the trouble.

That's why, when managing end-user experience, waiting for alerts to come in when applications actually start to falter is not an effective strategy. By then it's too late: determining the root cause of the problem will simply take too long, SLAs will go unmet, and applications will continue to degrade or even cease to function. Worse, the business risks losing customers who move away from slow, underperforming websites. IT teams need light shed on the actual trouble spots – a step forward from passive monitoring to active management that includes the ability to fix problems before they arise.

Unfortunately, most performance monitors today don't provide proactive management capabilities. They're constrained to monitoring and alerting IT teams when performance levels drop to, or below, certain thresholds. Yet, alerting is the easy part. The key question is: Does the performance monitoring tool provide the insight into what happened before service degraded and before any alerts were issued? And does it point to the true cause of performance degradation that affects user experience?
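The difference between threshold-based alerting and pre-degradation insight can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration – the class, threshold, and window size are invented for this example and do not represent any particular monitoring product:

```python
from collections import deque

class LatencyWatcher:
    """Toy illustration of reactive threshold alerting versus a simple
    proactive check on the latency trend. All names and numbers here
    are hypothetical, not drawn from any specific monitoring tool."""

    def __init__(self, threshold_ms=500.0, window=5):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def breached(self):
        # Reactive: alert only once the SLA threshold is already crossed.
        return bool(self.samples) and self.samples[-1] >= self.threshold_ms

    def trending_toward_breach(self):
        # Proactive: flag a sustained upward trend while still under threshold.
        if len(self.samples) < self.samples.maxlen:
            return False
        values = list(self.samples)
        rising = all(a < b for a, b in zip(values, values[1:]))
        return rising and values[-1] < self.threshold_ms

watcher = LatencyWatcher()
for ms in [120, 180, 260, 340, 430]:
    watcher.record(ms)

print(watcher.breached())                # False: no alert has fired yet
print(watcher.trending_toward_breach())  # True: degradation is on its way
```

A pure threshold monitor stays silent through all five samples; the trend check surfaces the problem while there is still time to act.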

Ultimately, what is needed to achieve that level of performance management is the integration of performance monitoring with transaction analysis that drills deep into the data center.

End-User Experience Monitoring Meets Business Transaction Management

None of this is to say that alerting application owners and IT teams when performance issues arise isn't still very important – it is. The point is that issuing alerts should be considered the fallback plan, not the ideal. However, there's an exponential increase in the power of performance monitoring when combined with Business Transaction Management (BTM). Essentially, BTM follows every single business transaction as it moves through every tier of an organization's IT infrastructure to provide greater understanding of the service quality, flow, and dependencies among both front-end and back-end tiers throughout an entire transaction lifecycle. When combined with performance monitoring, especially from the perspective of the end user, IT teams go from merely being able to identify that there is a performance problem to getting the insight needed to focus on the precise cause – and they can now do this before users notice and business is impacted.
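The core mechanic behind BTM-style transaction following can be sketched simply: tag each business transaction with a correlation ID, record latency per tier as the request flows through the stack, and the slow hop stands out on its own. The tier names, timings, and helper functions below are invented for illustration:

```python
import time
import uuid

def trace_transaction(tiers):
    """Hypothetical sketch: attach a correlation ID to one business
    transaction and record per-tier latency as it traverses the stack.
    Real BTM products propagate the ID across process boundaries."""
    txn_id = str(uuid.uuid4())
    spans = []
    for name, handler in tiers:
        start = time.perf_counter()
        handler(txn_id)  # a real tier would pass the ID downstream
        elapsed_ms = (time.perf_counter() - start) * 1000
        spans.append((name, elapsed_ms))
    return txn_id, spans

def slowest_tier(spans):
    # The payoff: point straight at the bottleneck tier, no manual hunt.
    return max(spans, key=lambda span: span[1])[0]

# Simulated tiers; the "database" tier is deliberately the slow one.
tiers = [
    ("web server", lambda txn: time.sleep(0.001)),
    ("app server", lambda txn: time.sleep(0.002)),
    ("database",   lambda txn: time.sleep(0.010)),
]
txn_id, spans = trace_transaction(tiers)
print(slowest_tier(spans))  # database
```

With every transaction carrying its own ID and per-tier timings, "something is slow" becomes "this tier is slow for these transactions" – which is the insight the paragraph above describes.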

The benefits gained by combining end-user experience monitoring with BTM are immediate and quantifiable. This comprehensive view of transaction flow also enables IT planners to see how user experience is affected by changes in infrastructure capacity - so it becomes clear when it is necessary to add more servers, change application code, or pursue more cost-effective options such as consolidating IT resources.

There is also a higher success rate in preventing performance problems before users are impacted, a reduced mean time to repair, and increased efficiency when rolling new applications and updates into production. Most important, this combination makes it possible to improve business processes in ways that can be measured directly, such as shorter release cycles, reduced cost per transaction, and fewer transaction failures (e.g. order fallout). The business also loses less revenue to poor application performance, while gaining customer satisfaction, employee productivity, and an improved brand image.

This combination of end-user experience monitoring and BTM also helps to prepare the enterprise for the shift to the cloud. It goes without saying that when the move to the cloud is made, there are significant changes to the infrastructure. Consider the adoption of a private cloud: how many physical servers will be needed to support the cloud? How many virtual servers will need to be in place? What is the most cost-effective way to architect the cloud to maximize the end-user experience?

There is no sense in obtaining the cost savings and agility of a private cloud if application performance degrades and hurts productivity. Making the most efficient decisions requires the capture of all end-user transactions and accurate measurement of the end-user experience so that the cloud can be built to reap the potential cost savings without compromising service levels and overall performance.

To navigate effectively through today's multifaceted IT infrastructures, enterprises need the insight and ability to take a proactive approach to service management, and that is only possible with the integration of Business Transaction Management and end-user experience monitoring. Such integration enables organizations to collect actionable performance information regardless of whether the infrastructure is physical, virtualized, cloud-based – or a combination of them all. It comes down to providing dependable, fast transactions that meet or exceed SLAs and user expectations.

Russell Rothstein is Founder and CEO, IT Central Station.
