What is Network Visibility for APM and NPM?
October 26, 2017

Keith Bromley
Ixia


Almost everyone in IT has heard about network performance monitoring (NPM) and application performance monitoring (APM) tools. But what are the real benefits? For instance, what kind of information do I really get, and is it worth the investment? And what about the complexity involved with these types of solutions?

The answer boils down to implementation. Essentially, did you install a visibility architecture first (so that you can optimize the flow of information to APM and NPM tools), or did you just add point solutions for APM and NPM? This answer will determine the effectiveness of your application monitoring solutions.


The visibility architecture concept is extremely important because it organizes the flow of information to security and monitoring tools. Without it, you have no way of knowing the quality and integrity of the data being fed to those tools. A visibility architecture delivers an end-to-end infrastructure that enables physical and virtual network, application, and security visibility. Specifically, network packet brokers can be included in the architecture to extract the requisite data and distribute it to one or more application monitoring tools.

Once network packet brokers are installed, it becomes much easier for APM and NPM solutions to optimize performance. In addition to these tools, other capabilities, like application intelligence and proactive network monitoring, can be installed as part of the visibility architecture to further increase its range of capabilities.
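To make the packet broker role concrete, here is a minimal, purely illustrative Python sketch of the idea: filter captured traffic by application and fan each packet out to the tools that registered for it. The port-to-tool mapping and tool names below are hypothetical, and a real packet broker does this in purpose-built hardware at line rate rather than in software.

# Illustrative sketch only: a toy "broker" that filters captured packets by
# destination port and fans each matching packet out to the tool group that
# registered for it. Port numbers and tool names are hypothetical examples.

from collections import defaultdict

# Map application ports to the monitoring tools that should receive that traffic.
TOOL_MAP = {
    80:   ["apm-tool", "npm-tool"],   # HTTP goes to both APM and NPM
    443:  ["apm-tool"],               # HTTPS goes to APM only
    5060: ["voip-monitor"],           # SIP goes to a VoIP quality monitor
}

def distribute(packets):
    """Route each (dst_port, payload) tuple to every tool subscribed to that port."""
    queues = defaultdict(list)
    for dst_port, payload in packets:
        for tool in TOOL_MAP.get(dst_port, []):   # traffic no tool asked for is dropped
            queues[tool].append(payload)
    return queues

if __name__ == "__main__":
    captured = [(80, b"GET /index.html"), (443, b"\x16\x03\x01"), (22, b"SSH-2.0")]
    for tool, pkts in distribute(captured).items():
        print(f"{tool}: {len(pkts)} packet(s) forwarded")

The point of the sketch is the fan-out: each tool receives only the slice of traffic it needs, which is what keeps downstream APM and NPM tools from drowning in irrelevant data.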

Here are some example use cases of what you can accomplish when a visibility architecture is combined with performance monitoring tools:

■ NPM solutions can be used to improve the quality of service (QoS) on the network and optimize the network service level agreement (SLA) performance.

■ APM solutions can be used to improve quality of experience (QoE) and optimize SLA performance for network applications, for example by capturing data that an APM tool can use to observe and diagnose application slowness.

■ APM tools can be used to analyze user behaviors.

■ Application intelligence can be used to identify slow or underperforming applications and network bottlenecks.

■ Proactive monitoring can be used to provide better and faster network rollouts by pre-testing the network with synthetic traffic to understand how it performs against either specific application traffic or a combination of traffic types (a minimal sketch of this idea follows after this list).

■ Proactive troubleshooting can be combined with application intelligence to help you more quickly anticipate where network and application problems may be coming from.

■ It is also possible to prevent applications from overloading the network bandwidth by using application intelligence to “see” application bandwidth growth in real time and head off catastrophic events.

■ You can also conduct inline network performance monitoring to optimize A/B traffic route flows (i.e., investigate path latency and performance problems).
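To illustrate the synthetic-traffic bullet above, the following minimal Python sketch replays a few representative transactions and records their response times before a rollout. The target URLs are placeholders, not real test endpoints; an actual pre-deployment test would generate the specific mix of application traffic (web, VoIP, database, and so on) that the new network segment must support.

# Minimal sketch of proactive (synthetic) monitoring: issue a handful of
# representative requests and record response times. URLs are placeholders.

import time
import urllib.error
import urllib.request

SYNTHETIC_TARGETS = [
    "https://example.com/",        # placeholder for a web front end
    "https://example.com/status",  # placeholder for an application endpoint
]

def probe(url, timeout=5.0):
    """Issue one synthetic request and return (http_status, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            return resp.status, time.monotonic() - start
    except urllib.error.HTTPError as err:
        return err.code, time.monotonic() - start   # server answered with an error code
    except OSError:
        return None, None                            # unreachable or timed out

if __name__ == "__main__":
    for url in SYNTHETIC_TARGETS:
        status, elapsed = probe(url)
        if status is None:
            print(f"{url}: unreachable")
        else:
            print(f"{url}: HTTP {status} in {elapsed * 1000:.0f} ms")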

In the end, any performance monitoring solution is only as good as the quality of data feeding the tools. Extraneous and duplicate data slow monitoring tools down and reduce the accuracy of their analysis. A few minutes spent understanding your network visibility, and the blind spots you have, may well be worth it.
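As a small illustration of the duplicate-data point, the sketch below removes byte-identical copies of a packet, such as the same packet captured at two different tap points, before it reaches a monitoring tool. This is an assumption about the general approach, not a description of any specific product.

# Illustrative deduplication: hash each packet's bytes and keep the first copy,
# so a packet captured at two tap points is only counted once downstream.

import hashlib

def deduplicate(packets):
    """Return packets with byte-identical duplicates removed, preserving order."""
    seen, unique = set(), []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest not in seen:
            seen.add(digest)
            unique.append(pkt)
    return unique

if __name__ == "__main__":
    captured = [b"pkt-A", b"pkt-B", b"pkt-A"]   # pkt-A was seen at two tap points
    print(f"{len(captured)} captured, {len(deduplicate(captured))} after deduplication")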

Keith Bromley is Senior Manager, Solutions Marketing at Ixia Solutions Group, a Keysight Technologies business.
