20 Technologies to Support APM - Part 3

APMdigest continues the list, cataloging the many valuable tools available – beyond what is technically categorized as Application Performance Management (APM) – to support the goals of improving application performance and business service.

Start with Part 1

Start with Part 2

11. Network Performance Monitoring (NPM)

The performance and availability of the network are essential factors in whether applications meet employee expectations. The rapid pace of innovation in mobile technology means that ensuring adequate network performance is becoming increasingly important. Therefore, investing in a good network performance monitoring solution, one that can at a minimum perform packet capture and analysis for the applications it serves, will enrich your APM strategy.
John Rakowski
Analyst, Infrastructure and Operations, Forrester Research
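As a concrete sketch of the packet capture and analysis Rakowski describes, the Python snippet below attributes a raw IPv4/TCP packet to an application by its transport port. The PORT_APPS map and the hand-built header are purely illustrative; a real NPM product would use a far richer classification engine.

```python
import struct

# Illustrative port-to-application mapping; a real NPM tool would use
# DPI signatures and flow heuristics rather than ports alone.
PORT_APPS = {80: "HTTP", 443: "HTTPS", 3306: "MySQL", 5432: "PostgreSQL"}

def classify_ipv4_tcp(packet: bytes) -> str:
    """Extract ports from a raw IPv4/TCP packet and name the application."""
    ihl = (packet[0] & 0x0F) * 4            # IPv4 header length in bytes
    proto = packet[9]                        # protocol field (6 = TCP)
    if proto != 6:
        return "non-TCP"
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    for port in (dst_port, src_port):        # server-side port usually names the app
        if port in PORT_APPS:
            return PORT_APPS[port]
    return "unknown"

# Build a minimal 20-byte IPv4 header (version 4, IHL 5, protocol 6)
# followed by TCP ports 51000 -> 443, just for demonstration.
ip_header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                        bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
tcp_ports = struct.pack("!HH", 51000, 443)
print(classify_ipv4_tcp(ip_header + tcp_ports))  # HTTPS
```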

12. Application-Aware Network Performance Monitoring (AA-NPM)

The challenge is that APM has evolved into a mosaic of monitoring tools, analytic engines, and event processors that provide many solutions to different problems. When you step back and look at the big picture it all comes into focus, but when you're trying to rationalize one technology over another, things aren't so clear at close range. I have found that the simplicity and ease of use of agentless monitoring (i.e. wire data analytics) is a great place to start. You may also hear the terms Application-Aware Infrastructure Performance Monitoring (AA-IPM) or Application-Aware Network Performance Monitoring (AA-NPM), both of which are complementary to APM and, I believe, an essential part of an overall APM solution.
Larry Dragich
Director of Enterprise Application Services at the Auto Club Group and Founder of the APM Strategies Group on LinkedIn.
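To illustrate what "wire data analytics" means in practice, here is a minimal Python sketch: no agent runs on the monitored server, and the transaction facts (method, path, status) are recovered purely from bytes observed on the network. The sample request and response bytes are invented for the demonstration.

```python
def parse_http_wire_data(request: bytes, response: bytes) -> dict:
    """Agentless extraction of transaction facts from raw HTTP wire data.

    Everything returned here comes from bytes captured off the wire,
    with no instrumentation inside the application itself.
    """
    req_line = request.split(b"\r\n", 1)[0].decode("ascii")
    method, path, _ = req_line.split(" ", 2)
    status_line = response.split(b"\r\n", 1)[0].decode("ascii")
    status = int(status_line.split(" ")[1])
    return {"method": method, "path": path, "status": status}

# Hypothetical captured bytes for one failing transaction.
req = b"GET /checkout HTTP/1.1\r\nHost: shop.example\r\n\r\n"
resp = b"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 0\r\n\r\n"
print(parse_http_wire_data(req, resp))
# {'method': 'GET', 'path': '/checkout', 'status': 500}
```

The appeal of the agentless approach is visible even in this toy: the 500 error on /checkout is detected without deploying anything to the server under observation.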

13. Deep Packet Inspection (DPI)

When it comes to APM, Deep Packet Inspection (DPI) isn't the first thing that comes to mind, but it should be, and we consider it a must-have in supporting APM. The general consensus seems to be that flow-based technologies (NetFlow, sFlow, IPFIX, etc.) provide enough visibility regarding communication, and endpoint solutions provide the details from the client point of view. But network and application analysis based on DPI can provide all this and more. DPI provides definitive latency measurements, and it allows analysts to quickly isolate the problem to either the network or the application. Once isolated, payload information from packets in the communication path can provide insights that no other solution can – like error messages that are being returned but not correctly processed by applications. And when combined with network forensics (storing packets for detailed, post-incident analysis), critical application transactions can be unequivocally verified from days or even weeks ago, something that is not available in any other form of APM solution.
Jay Botelho
Director of Product Management, WildPackets
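The network-versus-application isolation Botelho mentions can be sketched with timestamps a packet analyzer would record: the TCP handshake round trip involves no application code, so subtracting it from the request-to-first-byte delay approximates the server's share. The millisecond values below are hypothetical.

```python
def isolate_latency(ts_syn, ts_synack, ts_request, ts_first_byte):
    """Split observed delay into network vs application components.

    All timestamps are in milliseconds, as a packet analyzer would
    record them. The SYN -> SYN/ACK gap is a pure network round trip
    (no application involvement); the request -> first-response-byte
    gap includes that round trip plus server think time.
    """
    network_rtt = ts_synack - ts_syn
    total_response = ts_first_byte - ts_request
    app_time = max(total_response - network_rtt, 0)
    return {"network_rtt": network_rtt, "application_time": app_time}

# 30 ms handshake, 280 ms to first response byte: the application,
# not the network, owns most of the delay.
print(isolate_latency(0, 30, 100, 380))
# {'network_rtt': 30, 'application_time': 250}
```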

14. Network Packet Recording

Something that all enterprises should seek out is accurate network packet recording. It's imperative to have a solution that can capture, index and record network traffic with continuous 100% accuracy even during unpredictable traffic spikes. Accurate network packet recording enables IT teams to troubleshoot and diagnose network and application performance issues as soon as they arise, helps security teams investigate and contain security problems, and helps risk and compliance teams do their jobs. Operations teams can determine whether the problems reside within the IT infrastructure or within the applications running on the network – reducing time-to-resolution (TTR) and lowering operational expenditures (OPEX). Traditional detection tools won't cut it in an era where millions of dollars in revenue can be lost in milliseconds of downtime – the key is maintaining a network infrastructure that delivers continuous historical network visibility.
Mike Heumann
Sr. Director, Marketing (Endace), Emulex
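A minimal sketch of what capture-and-index looks like on disk, using the classic libpcap file layout (24-byte global header, 16-byte per-record headers). The two synthetic packets and the timestamp-to-offset index are illustrative only, not a production recorder, which must sustain line rate under traffic spikes.

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4  # classic pcap magic, microsecond timestamps

def pcap_global_header(linktype: int = 1) -> bytes:
    """24-byte pcap file header (linktype 1 = Ethernet)."""
    return struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, linktype)

def pcap_record(ts: float, packet: bytes) -> bytes:
    """16-byte per-packet header followed by the raw captured bytes."""
    sec = int(ts)
    usec = int((ts - sec) * 1_000_000)
    return struct.pack("<IIII", sec, usec, len(packet), len(packet)) + packet

# Record two synthetic packets and keep a timestamp -> byte-offset index,
# so a post-incident investigation can seek straight to a time window
# instead of scanning the whole capture.
capture = bytearray(pcap_global_header())
index = []
for ts, pkt in [(1700000000.0, b"\x00" * 60), (1700000001.5, b"\xff" * 60)]:
    index.append((ts, len(capture)))        # offset where this record starts
    capture += pcap_record(ts, pkt)

print(len(capture), index)
```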

15. Network Emulation

Network Emulation is a must-have. The first part of an APM cycle is to ensure that applications are designed for, and suitable for, the deployed environment. The network (mobile, WAN, Internet...) is a critical but often ignored component of this. One reason is the complexity of verifying applications against real-world networks; Network Emulation makes this easy by providing the ability to replicate the complete network environment. By re-creating all real-world network conditions (restricted bandwidth, latency, loss, QoS, etc.), Network Emulation gives organizations an accurate assessment of whether an application is suitable for them, long before they try to manage, with APM, the unmanageable.
Jim Swepson
Pre-sales Technologist, iTrinegy
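The conditions Swepson lists (restricted bandwidth, latency, loss) can be modeled in a few lines. This sketch only computes when each message would arrive over an emulated link rather than shaping live traffic as a real emulator does, and all link parameters are made-up examples.

```python
import random

def emulate_network(messages, bandwidth_bps, latency_s, loss_rate, rng):
    """Compute delivery times (seconds) for messages over an emulated link.

    messages is a list of payload sizes in bytes. Each message pays a
    serialization delay (size / bandwidth) while it occupies the link,
    then a propagation latency; loss_rate drops it outright (None).
    """
    clock = 0.0
    deliveries = []
    for size_bytes in messages:
        clock += size_bytes * 8 / bandwidth_bps   # link busy while sending
        if rng.random() < loss_rate:
            deliveries.append(None)               # dropped by the emulated link
        else:
            deliveries.append(round(clock + latency_s, 4))
    return deliveries

# Three 1500-byte packets over a 1 Mbps link with 80 ms latency and
# 30% loss; the RNG is seeded so a test run is reproducible.
print(emulate_network([1500, 1500, 1500], 1_000_000, 0.080, 0.30,
                      random.Random(7)))
```

Running the same application traffic profile through progressively worse parameter sets is the essence of the pre-deployment verification described above.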

20 Technologies to Support APM - Part 4
