Beyond Performance Monitoring
August 09, 2010
Russell Rothstein

It is no longer good enough to know that application performance is degrading - enterprises also need to be able to make the infrastructure and application adjustments necessary to avoid performance issues in the first place.

It's certainly obvious that enterprises have embraced virtualization as a way to consolidate and bring more agility to IT infrastructures. And over the next few years, more enterprises will move to cloud computing for many of the same reasons. What's not so obvious is that successful management of application performance is growing more difficult. In fact, because of all of the transaction interdependencies across the infrastructure – whether virtualized, physical and on-premise, or cloud – understanding the actual quality of application performance, from an end user's perspective, is more challenging than ever before.

The Shift from Performance Monitoring to Performance Management

Consider the typical contemporary infrastructure that a transaction may traverse: There is the client; firewalls, load balancers, web servers and application servers; external web service producers; gateway servers, grid servers, message buses, message brokers, ESB servers and perhaps a mainframe; databases, the network layer, and all of the associated equipment; as well as vast storage networks. And more often than ever, transactions depend on third-party Web services providers or cloud services. All of this infrastructure may be attempting to serve a request issued from a PC, tablet, smartphone or a Web service consumer. The takeaway? When service levels degrade, the resolution can't come fast enough. And there's simply no time to manually track down which part of the infrastructure is causing the trouble.

That's why, when managing end-user experience, waiting for alerts to come in when applications actually start to falter is not an effective strategy. By then it is already too late: it will take too long to determine the root cause of the problem, SLAs will go unmet, and applications will continue to degrade or even cease to function. Worse, the business risks losing customers who move away from slow, underperforming websites. IT teams need light shed on the actual trouble spots, and that requires a step forward from passive monitoring to active management that includes the ability to fix problems before they arise.

Unfortunately, most performance monitors today don't provide proactive management capabilities. They're constrained to monitoring and alerting IT teams when performance levels drop to, or below, certain thresholds. Yet, alerting is the easy part. The key question is: Does the performance monitoring tool provide the insight into what happened before service degraded and before any alerts were issued? And does it point to the true cause of performance degradation that affects user experience?
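To see why threshold-based alerting is "the easy part," consider a minimal sketch of how such a monitor behaves. The class name, window size, and latency values below are all hypothetical, chosen purely for illustration: the point is that the alert fires only after the rolling average has already crossed the limit, i.e. after users have already felt the slowdown.

```python
from collections import deque

class ThresholdMonitor:
    """Illustrative threshold-based alerting: it can only report
    that response times have ALREADY crossed a limit."""

    def __init__(self, threshold_ms: float, window: int = 5):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # rolling window of response times

    def record(self, response_ms: float) -> bool:
        """Record one measurement; return True if an alert fires."""
        self.samples.append(response_ms)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_ms

monitor = ThresholdMonitor(threshold_ms=500)
for latency in [120, 180, 900, 1100, 1300]:  # latency creeping upward
    if monitor.record(latency):
        print(f"ALERT after sample of {latency} ms")  # fires for 1100 and 1300
```

By the time the first alert prints, three slow samples have already been served to users; nothing in this model explains *why* latency rose, which is exactly the gap the article describes.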

Ultimately, what is needed to achieve that level of performance management is the integration of performance monitoring with transaction analysis that drills deep into the data center.

End-User Experience Monitoring Meets Business Transaction Management

None of this is to say that alerting application owners and IT teams when performance issues arise isn't still very important – it is. The point is that issuing alerts should be considered the fallback plan, not the ideal. However, there's an exponential increase in the power of performance monitoring when combined with Business Transaction Management (BTM). Essentially, BTM follows every single business transaction as it moves through every tier of an organization’s IT infrastructure to provide greater understanding of the service quality, flow, and dependencies among both front-end and back-end tiers throughout an entire transaction lifecycle. When combined with performance monitoring, especially from the perspective of the end user, IT teams go from merely being able to identify that there is a performance problem to getting the insight needed to focus on the precise cause – and they now can do this before users notice and business is impacted.

The benefits gained by combining end-user experience monitoring and BTM are immediate and quantifiable. This comprehensive view of transaction flow also enables IT planners to see how users are affected by changes in infrastructure capacity - so it becomes clear when it is necessary to add more servers, change application code, or pursue more cost-effective options such as consolidating IT resources.

There is also a higher success rate in preventing performance problems before users are impacted, reduced mean time-to-repair, and increased efficiency when rolling new applications and updates into production. Most important, this combination makes it possible to improve business processes that can be measured directly, such as shorter release cycles, reduced cost per transaction, and reduced transaction failure (e.g. order fallout). The business also loses less revenue to poor application performance while gaining increased customer satisfaction, higher employee productivity, and an improved brand image.

This combination of end-user experience monitoring and BTM also helps to prepare the enterprise for the shift to the cloud. It goes without saying that when the move to the cloud is made, there are significant changes to the infrastructure. Consider the adoption of a private cloud: how many physical servers will be needed to support the cloud? How many virtual servers will need to be in place? What is the most cost-effective way to architect the cloud to maximize the end-user experience?

There is no sense in obtaining the cost savings and agility of a private cloud if application performance degrades and hurts productivity. Making the most efficient decisions requires the capture of all end-user transactions and accurate measurement of the end-user experience so that the cloud can be built to reap the potential cost savings without compromising service levels and overall performance.

To navigate effectively through today's multifaceted IT infrastructures, enterprises need the insight and ability to take a proactive approach to service management, and that is only possible with the integration of Business Transaction Management and end-user experience monitoring. Such integration enables organizations to collect actionable performance information regardless of whether the infrastructure is physical, virtualized, cloud-based – or a combination of them all. It comes down to providing dependable, fast transactions that meet or exceed SLAs and user expectations.

Russell Rothstein is Founder and CEO, IT Central Station.
