Beyond Performance Monitoring
August 09, 2010
Russell Rothstein

It is no longer good enough to know that application performance is degrading - enterprises also need to be able to make the infrastructure and application adjustments necessary to avoid performance issues in the first place.

It's certainly obvious that enterprises have embraced virtualization as a way to consolidate and bring more agility to IT infrastructures. And over the next few years, more enterprises will move to cloud computing for many of the same reasons. What's not so obvious is that successful management of application performance is growing more difficult. In fact, because of all of the transaction interdependencies across the infrastructure – whether virtualized, physical and on-premise, or cloud – understanding the actual quality of application performance, from an end user's perspective, is more challenging than ever before.

The Shift from Performance Monitoring to Performance Management

Consider the typical contemporary infrastructure that a transaction may traverse: There is the client; firewalls, load balancers, web servers and application servers; external web service producers; gateway servers, grid servers, message buses, message brokers, ESB servers and perhaps a mainframe; databases, the network layer, and all of the associated equipment; as well as vast storage networks. More often than ever, transactions also depend on third-party Web services providers or cloud services. And all of this infrastructure may be attempting to serve a request issued from a PC, tablet, smartphone or a Web service consumer. The takeaway? When service levels degrade, the resolution can't come fast enough. And there's simply no time to manually track down which aspect of the infrastructure is the cause of the trouble.

That's why, when managing end-user experience, waiting for alerts to come in when applications actually start to falter is not an effective strategy. By then it's too late. It will simply take too long to determine the root cause of the problem; SLAs will go unmet and applications will continue to degrade and even cease to function. Worse, the business risks losing customers who move away from slow, underperforming websites. IT teams need light shed on the actual trouble spots. That requires a step beyond passive monitoring to active management, including the ability to fix problems before they arise.

Unfortunately, most performance monitors today don't provide proactive management capabilities. They're constrained to monitoring and alerting IT teams when performance levels drop to, or below, certain thresholds. Yet, alerting is the easy part. The key question is: Does the performance monitoring tool provide the insight into what happened before service degraded and before any alerts were issued? And does it point to the true cause of performance degradation that affects user experience?
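The gap between bare alerting and pre-degradation insight can be sketched in a few lines. This is a hypothetical illustration, not any particular monitoring product's API – the names, threshold, and window size are all invented. A plain monitor would emit only the alert; retaining a rolling window of recent samples is what lets the team inspect what happened *before* service degraded:

```python
from collections import deque

SLA_THRESHOLD_MS = 2000       # hypothetical SLA: respond within 2 seconds
HISTORY_SIZE = 500            # keep recent samples so an alert arrives with context

# rolling window of (timestamp, latency_ms) samples; old entries age out
history = deque(maxlen=HISTORY_SIZE)

def record_sample(timestamp, latency_ms):
    """Record one response-time sample; return an alert dict if the SLA is breached.

    Threshold alerting alone is the "easy part": it fires only after the line
    is crossed. Carrying the pre-breach history alongside the alert is what
    gives responders insight into what happened before service degraded.
    """
    history.append((timestamp, latency_ms))
    if latency_ms > SLA_THRESHOLD_MS:
        return {
            "alert": f"latency {latency_ms} ms exceeds {SLA_THRESHOLD_MS} ms SLA",
            "recent_samples": list(history),  # the lead-up to the breach
        }
    return None
```

The point of the sketch is the second field: an alert that arrives with its own history is the difference between knowing *that* performance dropped and seeing *what led up to it*.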

Ultimately, what is needed to achieve that level of performance management is the integration of performance monitoring with transaction analysis that drills deep into the data center.

End-User Experience Monitoring Meets Business Transaction Management

None of this is to say that alerting application owners and IT teams when performance issues arise isn't still important – it is. The point is that issuing alerts should be the fallback plan, not the ideal. Performance monitoring becomes far more powerful when combined with Business Transaction Management (BTM). Essentially, BTM follows every business transaction as it moves through every tier of an organization's IT infrastructure, providing greater understanding of the service quality, flow, and dependencies among both front-end and back-end tiers throughout the entire transaction lifecycle. Combined with performance monitoring – especially from the perspective of the end user – IT teams go from merely identifying that there is a performance problem to gaining the insight needed to pinpoint its precise cause, and they can do this before users notice and the business is impacted.
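The per-tier tracking described above can be sketched minimally as follows. This is a toy illustration under assumed conventions – the tier names, transaction ID, and timings are invented, and real BTM products do this instrumentation automatically. The core idea is that each tier records its latency under a shared transaction ID, so the slowest hop can be identified without manual detective work:

```python
from collections import defaultdict

# transaction_id -> list of (tier_name, latency_ms) hops, in traversal order
traces = defaultdict(list)

def record_hop(transaction_id, tier, latency_ms):
    """Each tier a transaction crosses appends its latency under a shared ID."""
    traces[transaction_id].append((tier, latency_ms))

def slowest_tier(transaction_id):
    """Return the (tier, latency) hop that contributed the most latency."""
    return max(traces[transaction_id], key=lambda hop: hop[1])

# A transaction flowing web server -> app server -> database:
record_hop("txn-42", "web", 12)
record_hop("txn-42", "app", 45)
record_hop("txn-42", "database", 310)
```

With the trace in hand, `slowest_tier("txn-42")` immediately points at the database hop – the "precise cause" insight the article describes, as opposed to an alert that merely says the end user waited too long.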

The benefits gained by combining end-user experience monitoring and BTM are immediate and quantifiable. This comprehensive view of transaction flow also enables IT planners to see how user behavior is affected by changes in infrastructure capacity – making it clear when it is necessary to add more servers or change application code, and helping to identify more cost-effective options such as consolidating IT resources.

There also is a higher success rate in preventing performance problems before users are impacted, reduced mean time-to-repair, and increased efficiency when it comes to rolling new applications and updates into production. Most important, this combination makes it possible to improve business processes that can be measured directly, such as shorter release cycles, reduced cost per transaction, and reduced transaction failure (e.g. order fallout). The result is less revenue lost to poor application performance, increased customer satisfaction and employee productivity, and an improved brand image.

This combination of end-user experience monitoring and BTM also helps to prepare the enterprise for the shift to the cloud. It goes without saying that when the move to the cloud is made, there are significant changes to the infrastructure. Consider the adoption of a private cloud: how many physical servers will be needed to support the cloud? How many virtual servers will need to be in place? What is the most cost-effective way to architect the cloud to maximize the end-user experience?
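As a rough illustration of the kind of sizing question that captured end-user transaction data answers (all numbers and names here are hypothetical, invented for the sketch): once peak load and per-server throughput are measured rather than guessed, the server count falls out of simple arithmetic, with headroom reserved so the end-user experience survives traffic spikes:

```python
import math

def servers_needed(peak_tps, per_server_tps, headroom=0.3):
    """Estimate how many (virtual) servers a private cloud needs.

    peak_tps       -- peak transactions/sec observed in captured end-user data
    per_server_tps -- measured sustainable throughput of a single server
    headroom       -- spare capacity fraction reserved for traffic spikes
    """
    return math.ceil(peak_tps * (1 + headroom) / per_server_tps)

# e.g. a measured peak of 1,000 tps, 150 tps per server, 30% headroom:
# 1000 * 1.3 / 150 = 8.67, rounded up to 9 servers
```

The formula is trivial; the point is that its inputs are only trustworthy when they come from measured transactions, which is exactly what the combined monitoring approach provides.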

There is no sense in obtaining the cost savings and agility of a private cloud if application performance degrades and hurts productivity. Making the most efficient decisions requires the capture of all end-user transactions and accurate measurement of the end-user experience so that the cloud can be built to reap the potential cost savings without compromising service levels and overall performance.

To navigate effectively through today's multifaceted IT infrastructures, enterprises need the insight and ability to take a proactive approach to service management, and that is only possible with the integration of Business Transaction Management and end-user experience monitoring. Such integration enables organizations to collect actionable performance information regardless of whether the infrastructure is physical, virtualized, cloud-based – or a combination of them all. It comes down to providing dependable, fast transactions that meet or exceed SLAs and user expectations.

Russell Rothstein is Founder and CEO, IT Central Station.
