Beyond Performance Monitoring
August 09, 2010
Russell Rothstein

It is no longer good enough to know that application performance is degrading - enterprises also need to be able to make the infrastructure and application adjustments necessary to avoid performance issues in the first place.

It's certainly obvious that enterprises have embraced virtualization as a way to consolidate and bring more agility to IT infrastructures. And over the next few years, more enterprises will move to cloud computing for many of the same reasons. What's not so obvious is that successful management of application performance is growing more difficult. In fact, because of all of the transaction interdependencies across the infrastructure – whether virtualized, physical and on-premise, or cloud – understanding the actual quality of application performance, from an end user's perspective, is more challenging than ever before.

The Shift from Performance Monitoring to Performance Management

Consider the typical contemporary infrastructure that a transaction may traverse: There is the client; firewalls, load balancers, web servers and application servers; external web service producers; gateway servers, grid servers, message buses, message brokers, ESB servers and perhaps a mainframe; databases, the network layer, and all of the associated equipment; as well as vast storage networks. Also, more often than ever, transactions now depend on third-party Web services providers or cloud services. And all of this infrastructure may be attempting to serve a request issued from a PC, tablet, smartphone or a Web service consumer. The takeaway? When service levels degrade, the resolution can't come fast enough. And there's simply no time to manually track down which part of the infrastructure is the cause of the trouble.
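The only practical way to follow a single transaction across that many tiers is to tag it with an identifier at the edge and carry the tag along. As a minimal illustrative sketch (the header name `X-Transaction-ID` and the `handle_at_tier` helper are hypothetical, not part of any specific product):

```python
import uuid

def new_transaction_id() -> str:
    """Mint a unique ID for a transaction entering the infrastructure."""
    return uuid.uuid4().hex

def handle_at_tier(tier_name: str, headers: dict) -> dict:
    # Each tier reuses the incoming ID (or mints one if it is the entry
    # point), so records emitted by the web server, app server, and
    # database can all be joined back together later.
    txn_id = headers.get("X-Transaction-ID") or new_transaction_id()
    headers["X-Transaction-ID"] = txn_id
    return {"tier": tier_name, "txn_id": txn_id}

headers: dict = {}
trace = [handle_at_tier(t, headers) for t in ("web", "app", "database")]
# All three records carry the same transaction ID.
assert len({r["txn_id"] for r in trace}) == 1
```

The point of the sketch is simply that, without such a shared identifier, correlating a slow end-user request with its database call would require the manual detective work the article warns against.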

That's why, when managing end-user experience, waiting for alerts to come in when applications actually start to falter is not an effective strategy. By then it's simply too late: it takes too long to determine the root cause of the problem, SLAs go unmet, and applications continue to degrade or even cease to function. Worse, the business risks losing customers who move away from slow, underperforming websites. IT teams need light shed on the actual trouble spots. That requires a step forward from passive monitoring to active management, including the ability to fix problems before they arise.

Unfortunately, most performance monitors today don't provide proactive management capabilities. They're constrained to monitoring and alerting IT teams when performance levels drop to, or below, certain thresholds. Yet, alerting is the easy part. The key question is: Does the performance monitoring tool provide the insight into what happened before service degraded and before any alerts were issued? And does it point to the true cause of performance degradation that affects user experience?
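The difference between alerting and insight can be made concrete with a toy comparison. In this hypothetical sketch (the threshold and window values are illustrative assumptions), a fixed threshold stays silent on the same latency samples that a simple trend check flags as degrading:

```python
THRESHOLD_MS = 500.0  # assumed alerting threshold

def threshold_alert(samples: list[float]) -> bool:
    """Classic monitor: fire only once the latest sample breaches the line."""
    return samples[-1] > THRESHOLD_MS

def trending_worse(samples: list[float], window: int = 3,
                   min_rise_ms: float = 50.0) -> bool:
    """Flag steadily climbing latency, even while still under threshold."""
    recent = samples[-window:]
    rises = [b - a for a, b in zip(recent, recent[1:])]
    return all(r > 0 for r in rises) and sum(rises) >= min_rise_ms

latency_ms = [210.0, 230.0, 310.0, 420.0]  # degrading, but below 500 ms
assert not threshold_alert(latency_ms)     # threshold monitor stays silent
assert trending_worse(latency_ms)          # trend check fires early
```

A real tool would of course use far more robust statistics, but the contrast captures the article's point: the data from before the alert is where the proactive value lives.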

Ultimately, what is needed to achieve that level of performance management is the integration of performance monitoring with transaction analysis that drills deep into the data center.

End-User Experience Monitoring Meets Business Transaction Management

None of this is to say that alerting application owners and IT teams when performance issues arise isn't still very important – it is. The point is that issuing alerts should be considered the fallback plan, not the ideal. However, there's an exponential increase in the power of performance monitoring when combined with Business Transaction Management (BTM). Essentially, BTM follows every single business transaction as it moves through every tier of an organization’s IT infrastructure to provide greater understanding of the service quality, flow, and dependencies among both front-end and back-end tiers throughout an entire transaction lifecycle. When combined with performance monitoring, especially from the perspective of the end user, IT teams go from merely being able to identify that there is a performance problem to getting the insight needed to focus on the precise cause – and they now can do this before users notice and business is impacted.
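The payoff of following each transaction through every tier is that a slowdown can be localized rather than merely detected. A minimal sketch of that idea, assuming hypothetical per-tier timing records keyed by a shared transaction ID:

```python
# Hypothetical per-tier records for two transactions; in a real BTM tool
# these would be captured automatically as the transaction flows through.
spans = [
    {"txn_id": "abc", "tier": "web server", "ms": 12},
    {"txn_id": "abc", "tier": "app server", "ms": 48},
    {"txn_id": "abc", "tier": "database",   "ms": 640},
    {"txn_id": "xyz", "tier": "web server", "ms": 15},
]

def slowest_tier(spans: list[dict], txn_id: str) -> str:
    """Join one transaction's records by ID and rank tiers by time spent."""
    mine = [s for s in spans if s["txn_id"] == txn_id]
    return max(mine, key=lambda s: s["ms"])["tier"]

assert slowest_tier(spans, "abc") == "database"
```

With this breakdown, the question shifts from "is the application slow?" to "which tier is making it slow?" – which is precisely the insight needed to act before users notice.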

The benefits gained by combining end-user experience monitoring and BTM are immediate and quantifiable. This comprehensive view of transaction flow also enables IT planners to see how user behavior is affected by changes in infrastructure capacity - so it becomes clear when it is necessary to add more servers, change application code, or pursue more cost-effective options such as consolidating IT resources.

There also is a higher success rate in preventing performance problems before users are impacted, reduced mean time-to-repair, and increased efficiency when it comes to rolling new applications and updates into production. Most important, this combination makes it possible to improve business processes that can be measured directly, such as shorter release cycles, reduced cost per transaction, and reduced transaction failure (e.g. order fallout). The business also loses less revenue to poor application performance, and gains increased customer satisfaction, higher employee productivity, and an improved brand image.

This combination of end-user experience monitoring and BTM also helps to prepare the enterprise for the shift to the cloud. It goes without saying that when the move to the cloud is made, there are significant changes to the infrastructure. Consider the adoption of a private cloud: how many physical servers will be needed to support the cloud? How many virtual servers will need to be in place? What is the most cost-effective way to architect the cloud to maximize the end-user experience?

There is no sense in obtaining the cost savings and agility of a private cloud if application performance degrades and hurts productivity. Making the most efficient decisions requires the capture of all end-user transactions and accurate measurement of the end-user experience so that the cloud can be built to reap the potential cost savings without compromising service levels and overall performance.
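Capturing all end-user transactions matters because averages hide the slow tail that users actually experience. As an illustrative sketch (the sample values and SLA target are assumptions), a high-percentile check against the SLA catches what a mean would miss:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile over captured transaction times."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[rank - 1]

# Hypothetical captured page-load times for one transaction type.
page_load_ms = [180, 210, 250, 260, 300, 340, 360, 420, 900, 2400]
SLA_MS = 1000.0

avg = sum(page_load_ms) / len(page_load_ms)
p95 = percentile(page_load_ms, 95)

assert avg < SLA_MS   # the average looks healthy...
assert p95 > SLA_MS   # ...while the tail breaches the SLA
```

Measuring the experience at a high percentile, across every captured transaction, is what lets a cloud architecture be sized for real service levels rather than for a flattering average.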

To navigate effectively through today's multifaceted IT infrastructures, enterprises need the insight and ability to take a proactive approach to service management, and that is only possible with the integration of Business Transaction Management and end-user experience monitoring. Such integration enables organizations to collect actionable performance information regardless of whether the infrastructure is physical, virtualized, cloud-based – or a combination of them all. It comes down to providing dependable, fast transactions that meet or exceed SLAs and user expectations.

Russell Rothstein is Founder and CEO, IT Central Station.
