
20 Technologies to Support APM - Part 3

APMdigest continues the list, cataloging the many valuable tools available – beyond what is technically categorized as Application Performance Management (APM) – to support the goals of improving application performance and business service.

Start with Part 1

Start with Part 2

11. Network Performance Monitoring (NPM)

The performance and availability of the network are essential factors in whether applications meet employee expectations. The rapid pace of innovation in mobile technology makes ensuring adequate network performance increasingly important. Therefore, investing in a good network performance monitoring solution – one that can, at a minimum, perform packet capture and analysis for the applications it serves – will enrich your APM strategy.
John Rakowski
Analyst, Infrastructure and Operations, Forrester Research
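To make the packet-capture-and-analysis idea concrete, here is a minimal sketch of the kind of per-application aggregation an NPM tool performs after decoding traffic. The packet records, field names, and application labels are hypothetical, not the output of any specific product:

```python
from collections import defaultdict

# Hypothetical parsed packet records, as a capture-and-analysis tool
# might produce them after decoding: (timestamp_s, app_name, byte_count).
packets = [
    (0.00, "crm", 1200), (0.05, "crm", 800),
    (0.10, "email", 400), (0.90, "crm", 1500),
    (1.20, "email", 600),
]

def throughput_by_app(packets, window_s):
    """Aggregate bytes per application over the capture window (bytes/s)."""
    totals = defaultdict(int)
    for _, app, nbytes in packets:
        totals[app] += nbytes
    return {app: total / window_s for app, total in totals.items()}

print(throughput_by_app(packets, window_s=2.0))
```

In a real deployment the records would come from a capture engine (libpcap-style) rather than a literal list, but the aggregation step looks much the same.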

12. Application-Aware Network Performance Monitoring (AA-NPM)

The challenge is that APM has evolved into a mosaic of monitoring tools, analytic engines, and event processors that provide many solutions to different problems. When you step back and look at the big picture it all comes into focus, but when you're trying to rationalize one technology over another, things aren't so clear at close range. I have found that the simplicity and ease of use of agentless monitoring (i.e., wire data analytics) make it a great place to start. You may also hear the terms Application-Aware Infrastructure Performance Monitoring (AA-IPM) or Application-Aware Network Performance Monitoring (AA-NPM), both of which are complementary to APM and, I believe, an essential part of an overall APM solution.
Larry Dragich
Director of Enterprise Application Services at the Auto Club Group and Founder of the APM Strategies Group on LinkedIn.

13. Deep Packet Inspection (DPI)

When it comes to APM, Deep Packet Inspection (DPI) isn't the first thing that comes to mind, but it should be, and we consider it a must-have in supporting APM. The general consensus seems to be that flow-based technologies (NetFlow, sFlow, IPFIX, etc.) provide enough visibility regarding communication, and endpoint solutions provide the details from the client point of view. But network and application analysis based on DPI can provide all this and more. DPI provides definitive latency measurements, and it allows analysts to quickly isolate a problem to the network or the application. Once isolated, payload information from packets in the communication path can provide insights that no other solution can – like error messages that are being returned but not correctly processed by applications. And when combined with network forensics (storing packets for detailed, post-incident analysis), critical application transactions from days or even weeks ago can be unequivocally verified, something that is not available in any other form of APM solution.
Jay Botelho
Director of Product Management, WildPackets
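The network-versus-application isolation described above can be sketched from packet timestamps alone: the TCP handshake involves no application logic, so it approximates pure network delay. The trace values and field names below are illustrative, not from any vendor tool:

```python
# Hypothetical timestamps (seconds) extracted from a packet trace for one
# TCP connection carrying a single request/response exchange.
trace = {
    "syn":      0.000,   # client SYN
    "syn_ack":  0.040,   # server SYN-ACK: pure network round trip
    "request":  0.100,   # last byte of the request seen on the wire
    "response": 0.550,   # first byte of the response seen on the wire
}

def isolate_latency(trace):
    """Split observed delay into network vs. application components.

    SYN -> SYN-ACK approximates the network round trip; the gap between
    request and first response byte, minus that round trip, approximates
    server-side processing time.
    """
    network_rtt = trace["syn_ack"] - trace["syn"]
    total_wait = trace["response"] - trace["request"]
    app_time = total_wait - network_rtt
    return network_rtt, app_time

rtt, app = isolate_latency(trace)
print(f"network RTT: {rtt*1000:.0f} ms, application time: {app*1000:.0f} ms")
```

Here the application accounts for most of the wait, so the network is quickly ruled out – exactly the triage DPI-based analysis enables.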

14. Network Packet Recording

Something that all enterprises should seek out is accurate network packet recording. It's imperative to have a solution that can capture, index, and record network traffic with continuous 100% accuracy, even during unpredictable traffic spikes. Accurate network packet recording enables IT teams to troubleshoot and diagnose network and application performance issues as soon as they arise, helps security teams investigate and contain security problems, and helps risk and compliance teams do their jobs. Operations teams can determine whether problems reside within the IT infrastructure or within the applications running on the network – reducing time-to-resolution (TTR) and lowering operational expenditures (OPEX). Traditional detection tools won't cut it in an era where milliseconds of downtime can cost millions of dollars in revenue – the key is maintaining a network infrastructure that delivers continuous historical network visibility.
Mike Heumann
Sr. Director, Marketing (Endace), Emulex
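The capture-index-retrieve workflow above can be illustrated with a toy recorder: a ring buffer of timestamped packets plus a time index for post-incident queries. This is a sketch of the concept only; production recorders capture at line rate to dedicated storage:

```python
import bisect
from collections import deque

class PacketRecorder:
    """Minimal sketch of a continuous packet recorder: retains the most
    recent `capacity` packets and supports time-windowed retrieval for
    post-incident analysis. Assumes timestamps arrive in order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.times = deque()      # sorted timestamps act as the index
        self.payloads = deque()

    def record(self, ts, payload):
        if len(self.times) == self.capacity:   # ring buffer: drop oldest
            self.times.popleft()
            self.payloads.popleft()
        self.times.append(ts)
        self.payloads.append(payload)

    def query(self, start, end):
        """Return payloads captured in [start, end] via binary search."""
        lo = bisect.bisect_left(self.times, start)
        hi = bisect.bisect_right(self.times, end)
        return [self.payloads[i] for i in range(lo, hi)]

rec = PacketRecorder(capacity=3)
for ts, pkt in [(1.0, "a"), (2.0, "b"), (3.0, "c"), (4.0, "d")]:
    rec.record(ts, pkt)
print(rec.query(2.0, 3.0))
```

The bounded capacity mirrors real recorders, which keep a rolling window of history; the indexed `query` is what lets analysts pull the exact packets from an incident days earlier.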

15. Network Emulation

Network Emulation is a must-have. The first part of an APM cycle is to ensure that applications are designed for, and suitable for, the deployment environment. The network (mobile, WAN, Internet, etc.) is a critical but often ignored component of this. One reason is the complexity of verifying applications against real-world networks; Network Emulation makes this easy by providing the ability to replicate the complete network environment. By re-creating real-world network conditions (restricted bandwidth, latency, loss, QoS, etc.), Network Emulation gives organizations an accurate assessment of whether an application is suitable for them, long before they try to manage, with APM, the unmanageable.
Jim Swepson
Pre-sales Technologist, iTrinegy
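The core idea of emulating network conditions can be sketched in a few lines: apply a fixed latency and random loss to a message stream. This is a deliberately simplified model (real emulators also handle jitter, bandwidth shaping, and reordering), and all names and values below are illustrative:

```python
import random

def emulate_link(messages, latency_s, loss_rate, seed=42):
    """Toy network emulation: apply a fixed one-way latency and random
    packet loss to a stream of (send_time, payload) messages.
    `seed` makes the loss pattern reproducible across test runs."""
    rng = random.Random(seed)
    delivered = []
    for send_time, payload in messages:
        if rng.random() < loss_rate:    # packet dropped in transit
            continue
        delivered.append((send_time + latency_s, payload))
    return delivered

msgs = [(i * 0.1, f"pkt{i}") for i in range(10)]
out = emulate_link(msgs, latency_s=0.25, loss_rate=0.2)
print(f"{len(out)} of {len(msgs)} messages delivered, each delayed 250 ms")
```

Running an application's traffic through a model like this (scaled up to real emulation hardware or software) shows how it behaves under WAN or mobile conditions before deployment – the pre-APM verification step the entry describes.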

20 Technologies to Support APM - Part 4
