Beyond Performance Monitoring

It is no longer good enough to know that application performance is degrading – enterprises also need to be able to make the infrastructure and application adjustments necessary to avoid performance issues in the first place.

It's certainly obvious that enterprises have embraced virtualization as a way to consolidate and to bring more agility to IT infrastructures. And over the next few years, more enterprises will move to cloud computing for many of the same reasons. What's not so obvious is that successful management of application performance is growing more difficult. In fact, because of all of the transaction interdependencies across the infrastructure – whether virtualized, physical and on-premises, or cloud – understanding the actual quality of application performance, from an end user's perspective, is more challenging than ever before.

The Shift from Performance Monitoring to Performance Management

Consider the typical contemporary infrastructure that a transaction may traverse: There is the client; firewalls, load balancers, web servers and application servers; external web service producers; gateway servers, grid servers, message buses, message brokers, ESB servers and perhaps a mainframe; databases, the network layer, and all of the associated equipment; as well as vast storage networks. And transactions now depend, more often than ever, on third-party Web services providers or cloud services. All of this infrastructure may be attempting to serve a request issued from a PC, tablet, smartphone or a Web service consumer. The takeaway? When service levels degrade, the resolution can't come fast enough. There's simply no time to manually track down which part of the infrastructure is the cause of the trouble.

That's why, when managing end-user experience, waiting for alerts to come in when applications actually start to falter is not an effective strategy. By then it's simply too late: it will take too long to determine the root cause of the problem, SLAs will go unmet, and applications will continue to degrade or even cease to function. Worse, the business risks losing customers who move away from slow, underperforming websites. IT teams need light shed on the actual trouble spots, and that requires a step forward from passive monitoring to active management – management that includes the ability to fix problems before they arise.

Unfortunately, most performance monitors today don't provide proactive management capabilities. They're constrained to monitoring and alerting IT teams when performance levels drop to, or below, certain thresholds. Yet, alerting is the easy part. The key question is: Does the performance monitoring tool provide the insight into what happened before service degraded and before any alerts were issued? And does it point to the true cause of performance degradation that affects user experience?
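
To make that concrete, here is a minimal sketch – in Python, with invented names and thresholds – of the difference between bare threshold alerting and alerting that retains pre-breach context. The rolling buffer is the point: when an alert fires, the samples leading up to the breach travel with it.

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Sample:
    timestamp: float
    response_ms: float
    tier: str  # e.g. "web", "app", "db" – illustrative labels

class ThresholdMonitor:
    """Alert on a threshold breach, but keep a rolling window of
    pre-breach samples so the alert carries context, not just a
    red light. A sketch, not any vendor's implementation."""

    def __init__(self, threshold_ms: float, history: int = 500):
        self.threshold_ms = threshold_ms
        self.buffer: deque = deque(maxlen=history)

    def record(self, sample: Sample) -> None:
        self.buffer.append(sample)
        if sample.response_ms >= self.threshold_ms:
            self.alert(sample)

    def alert(self, breach: Sample) -> None:
        # The pre-breach window is what turns an alert into a lead:
        # which tier's latency was already creeping before the breach?
        context = list(self.buffer)[:-1]
        print(f"ALERT: {breach.response_ms:.0f} ms on tier "
              f"'{breach.tier}'; {len(context)} pre-breach samples attached")

# Usage: feed it samples; the breach fires an alert with history.
monitor = ThresholdMonitor(threshold_ms=2000)
monitor.record(Sample(time.time(), 180.0, "web"))
monitor.record(Sample(time.time(), 2400.0, "db"))
```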

Ultimately, what is needed to achieve that level of performance management is the integration of performance monitoring with transaction analysis that drills deep into the data center.

End-User Experience Monitoring Meets Business Transaction Management

None of this is to say that alerting application owners and IT teams when performance issues arise isn't still very important – it is. The point is that issuing alerts should be considered the fallback plan, not the ideal. However, there's an exponential increase in the power of performance monitoring when combined with Business Transaction Management (BTM). Essentially, BTM follows every single business transaction as it moves through every tier of an organization’s IT infrastructure to provide greater understanding of the service quality, flow, and dependencies among both front-end and back-end tiers throughout an entire transaction lifecycle. When combined with performance monitoring, especially from the perspective of the end user, IT teams go from merely being able to identify that there is a performance problem to getting the insight needed to focus on the precise cause – and they now can do this before users notice and business is impacted.
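
As an illustration of the BTM idea – not any particular product's implementation – the sketch below assigns each transaction a correlation ID and times every tier it passes through. Tier names, the timing method, and the helper class are all assumptions made for the example.

```python
import time
import uuid

class TransactionTrace:
    """One record per business transaction, with a timed entry for
    every tier it touches. A sketch of the BTM concept only."""

    def __init__(self):
        # A correlation ID that every tier echoes as the request moves.
        self.txn_id = uuid.uuid4().hex
        self.hops = []  # list of (tier, elapsed_ms)

    def time_tier(self, tier, work):
        start = time.perf_counter()
        result = work()
        self.hops.append((tier, (time.perf_counter() - start) * 1000))
        return result

    def slowest_tier(self):
        # The payoff: point at the tier, not just the symptom.
        return max(self.hops, key=lambda hop: hop[1])

# Usage: the same ID travels with the request through each tier.
trace = TransactionTrace()
trace.time_tier("web", lambda: time.sleep(0.01))
trace.time_tier("app", lambda: time.sleep(0.03))
trace.time_tier("db",  lambda: time.sleep(0.08))
print(trace.txn_id, "slowest tier:", trace.slowest_tier())
```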

The benefits gained by combining end-user experience monitoring with BTM are immediate and quantifiable. This comprehensive view of transaction flow also enables IT planners to see how user behavior is affected by changes in infrastructure capacity – so it becomes clear when it is necessary to add more servers or change application code, and it helps to identify more cost-effective options such as consolidating IT resources.

There also is a higher success rate in preventing performance problems before users are impacted, reduced mean time-to-repair, and increased efficiency when it comes to rolling new applications and updates into production. Most important, this combination makes it possible to improve business processes in ways that can be measured directly, such as shorter release cycles, reduced cost per transaction, and reduced transaction failure (e.g., order fallout). The results also include less revenue lost to poor application performance, increased customer satisfaction, higher employee productivity, and an improved brand image.

This combination of end-user experience monitoring and BTM also helps to prepare the enterprise for the shift to the cloud. It goes without saying that when the move to the cloud is made, there are significant changes to the infrastructure. Consider the adoption of a private cloud: how many physical servers will be needed to support the cloud? How many virtual servers will need to be in place? What is the most cost-effective way to architect the cloud to maximize the end-user experience?
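
Those sizing questions become tractable arithmetic once end-user transaction volumes are actually captured. A toy sketch, with every number an illustrative assumption:

```python
import math

def servers_needed(peak_tps: float, per_server_tps: float,
                   headroom: float = 0.3) -> int:
    """Toy capacity estimate: peak demand over per-server capacity,
    padded with headroom for failover and growth. Every figure here
    is an illustrative assumption, not guidance from the article."""
    return math.ceil(peak_tps * (1 + headroom) / per_server_tps)

# e.g. 1,200 transactions/sec at peak, 150 tps per virtual server:
print(servers_needed(peak_tps=1200, per_server_tps=150))  # -> 11
```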

There is no sense in obtaining the cost savings and agility of a private cloud if application performance degrades and hurts productivity. Making the most efficient decisions requires the capture of all end-user transactions and accurate measurement of the end-user experience so that the cloud can be built to reap the potential cost savings without compromising service levels and overall performance.
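
As one hedged example of what "accurate measurement of the end-user experience" can mean in practice, the sketch below checks a high percentile of captured response times against an SLA target, since averages hide exactly the slow outliers that drive users away. The 95th-percentile and 2-second figures are assumptions, not from the article.

```python
def percentile(samples, pct):
    """Nearest-rank percentile – crude, but enough for a sketch."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1,
                      round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def meets_sla(response_times_ms, sla_ms=2000.0, pct=95.0):
    """True if the pct-th percentile of captured end-user response
    times is within the SLA target."""
    return percentile(response_times_ms, pct) <= sla_ms

# e.g. captured end-user response times, in milliseconds:
observed = [310.0, 450.0, 520.0, 1900.0, 2400.0]
print(meets_sla(observed))  # False – the p95 here exceeds 2000 ms
```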

To navigate effectively through today's multifaceted IT infrastructures, enterprises need the insight and ability to take a proactive approach to service management, and that is only possible with the integration of Business Transaction Management and end-user experience monitoring. Such integration enables organizations to collect actionable performance information regardless of whether the infrastructure is physical, virtualized, cloud-based – or a combination of them all. It comes down to providing dependable, fast transactions that meet or exceed SLAs and user expectations.

Russell Rothstein is Founder and CEO, IT Central Station.
