How to Ensure APM Success

Keith Bromley

A recent APMdigest blog by Jean Tunis, The Evolving Needs of Application Performance Monitoring - Part 2, provided an excellent background on Application Performance Monitoring (APM) and what it does. The benefits of APM solutions are much better understood than in years past. An interesting data point from Gartner Inc. mentioned in the article confirms this: IT departments plan to increase the share of applications monitored with APM solutions from 5% in 2018 to a projected 20% in 2021.

A further topic I want to touch on, though, is the need for good-quality data. To get the most out of your APM solution, you need to feed it the best-quality data possible. Irrelevant, fragmented, and corrupt data are all common culprits that either slow an APM solution's time to resolution or prevent it from resolving the problem at all.

There are two easy steps you can take to improve the quality of the input data to your APM tool. First, install taps to collect monitoring data. Taps can be installed anywhere in your network. This lets you collect ingress/egress traffic at your network edge, data to and from remote branch offices, and data from any segment you suspect is experiencing an issue.

Taps deliver the ultimate flexibility. In contrast, SPAN and mirror ports on your Layer 2 and 3 switches do not. Placing switches all over your network just to capture data is unnecessary and expensive, and mirror ports can drop packets, especially when the switch CPU is overloaded. When it comes to troubleshooting and performance monitoring, you need every piece of relevant data, not just portions of it.

Second, deploy a network packet broker (NPB) in your network. The NPB aggregates monitoring data from across your network, filters that data against the criteria you are looking for, and removes unnecessary duplicate copies. Once this is done, the NPB forwards the data on to your APM solution. An NPB can reduce the traffic sent to your APM solution by 50% or more, making the solution that much more effective and potentially reducing your future APM tool costs.
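To make the three NPB functions concrete, here is a minimal sketch of aggregation, filtering, and deduplication in Python. The Packet structure, field names, and port-based filter criterion are illustrative assumptions for this sketch, not a vendor API; a real NPB does this in hardware at line rate.

```python
# Conceptual sketch of a network packet broker's data path:
# aggregate several tap feeds, filter on criteria of interest,
# and drop duplicate copies before forwarding to the APM tool.
# (Hypothetical Packet model; not a real NPB interface.)
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    dst_port: int
    payload: bytes

def broker(tap_streams, wanted_ports):
    """Yield unique packets matching the filter, merged from all taps."""
    seen = set()
    for stream in tap_streams:            # aggregate: merge all tap feeds
        for pkt in stream:
            if pkt.dst_port not in wanted_ports:
                continue                  # filter: drop irrelevant traffic
            digest = hashlib.sha256(
                f"{pkt.src}|{pkt.dst}|{pkt.dst_port}".encode() + pkt.payload
            ).digest()
            if digest in seen:
                continue                  # dedupe: same packet seen at two taps
            seen.add(digest)
            yield pkt                     # forward to the APM solution
```

The deduplication step matters because the same packet often crosses two or more tap points on its way through the network; without it, the APM tool counts that traffic twice.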

Something else to consider is that the tap and NPB concept can be used in cloud solutions as well. This means you can deploy it across both physical on-premises networks and virtual networks. This is especially important for hybrid cloud scenarios (a mixture of physical on-premises and public/private cloud) that are prevalent in today's enterprise networks. Monitoring across this mixture of network types can be a significant problem, and one that is easily remedied with a combination of physical taps, virtual taps, and an NPB.

In the end, APM solutions are a critical component of troubleshooting and performance monitoring, but you need to make sure your APM solution is getting the right data.
