
Beyond the Perimeter: Why Application-Aware Network Monitoring Matters

Mark Troester
Progress

The modern world relies on applications: every business, regardless of industry, depends on them to varying degrees. Whether you operate a hospital, an e-commerce business, a farm or a factory, applications play a central role in day-to-day operations. Even a few minutes of application downtime can have disastrous consequences.

Consider this: according to a recent study, every minute of downtime costs businesses an average of $4,500, and outages typically last between 20 and 60 minutes. At those figures, even a 20-minute outage costs around $90,000, and a 60-minute one approaches $270,000.
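The arithmetic is worth spelling out. A minimal sketch in Python, using only the study's figures ($4,500 per minute, 20-to-60-minute outages):

```python
# Downtime cost range implied by the study's figures:
# $4,500 per minute, with outages lasting 20 to 60 minutes.
COST_PER_MINUTE = 4_500

for minutes in (20, 60):
    print(f"{minutes}-minute outage: ${COST_PER_MINUTE * minutes:,}")

# 20-minute outage: $90,000
# 60-minute outage: $270,000
```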

While most businesses focus on endpoint and perimeter protection to guard against such incidents, many factors beyond conventional perimeter breaches can disrupt an application.

To effectively manage application experience (AX) and user experience (UX), businesses need greater visibility into their networks. This can be achieved through application-aware network performance monitoring (NPM) technologies.

How NPM Works

For many companies, application performance is a black box. They often become aware of issues only when complaints start pouring in, and even then, identifying the root cause can be time-consuming — a luxury most companies cannot afford.

NPM changes the game. It enables you to identify which applications are responding more slowly than expected and to measure response times for both the network and the application itself. This makes it possible to quickly distinguish network delays from application delays when troubleshooting.
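As a rough illustration of that differentiation (a minimal sketch, not the logic of any particular NPM product; the threshold and function name are assumptions):

```python
# Hypothetical sketch: decompose a slow transaction's total response time
# into network transport time and server processing time, then flag the
# dominant component. Real NPM tools derive these timings from traffic data.

def classify_delay(network_ms: float, server_ms: float,
                   threshold_ms: float = 500.0) -> str:
    """Return a rough diagnosis for a single transaction."""
    if network_ms + server_ms <= threshold_ms:
        return "within target"
    return "network delay" if network_ms > server_ms else "application delay"

print(classify_delay(network_ms=40.0, server_ms=1200.0))  # application delay
print(classify_delay(network_ms=900.0, server_ms=150.0))  # network delay
```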

If the problem lies with the application, NPM gives IT teams comprehensive information to resolve the issue: application response times, network transport times, transaction counts, server response times, response-time distributions (minimum, maximum, average and percentiles), the number of concurrent users, and more.
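To make that metric set concrete, here is a hypothetical sketch of how raw response-time samples might be rolled up into such a summary (the field names are illustrative, not drawn from any specific product):

```python
import statistics

def summarize(samples_ms: list[float]) -> dict:
    """Roll up raw response-time samples (in milliseconds) into a summary."""
    samples = sorted(samples_ms)
    # quantiles(n=100) returns the 1st..99th percentiles; index 94 is p95.
    p95 = statistics.quantiles(samples, n=100)[94]
    return {
        "transactions": len(samples),
        "min_ms": samples[0],
        "max_ms": samples[-1],
        "avg_ms": statistics.fmean(samples),
        "p95_ms": p95,
    }

print(summarize([88.0, 95.0, 97.0, 101.0, 120.0, 150.0, 310.0, 2400.0]))
```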

Of course, not all applications require monitoring. NPM should be employed selectively, focusing on critical applications that are vital to daily business operations — such as customer-facing e-commerce applications. Additionally, it is crucial to integrate the information provided by NPM with the broader IT monitoring, management and surveillance ecosystem. In the realm of system security and operational efficiency, everything is interconnected.

NPM in Action

Let's consider a hypothetical scenario: you run a health insurance company. Thousands of users access your application daily to schedule doctor's appointments, make payments, and more. Suddenly, seemingly out of nowhere, the application stops functioning. Complaints and calls flood in, and people are understandably frantic; after all, healthcare is of utmost importance.

At this point, many IT departments start the blame game or grasp blindly for answers they may never find. However, with NPM in place, the next step is simple: consult the list.

What is the list? NPM solutions measure the response times and delays for every user-to-app transaction, aggregating them into a list sortable from slowest to fastest. IT can thus swiftly identify the root cause of the problem and act on it immediately.
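In essence, the list is just per-transaction timing records ranked by latency. A minimal sketch of the idea (the record fields and values are hypothetical):

```python
# Hypothetical per-transaction records, as an NPM tool might collect them.
transactions = [
    {"user": "u1023", "endpoint": "/appointments", "response_ms": 4200},
    {"user": "u0877", "endpoint": "/payments",     "response_ms": 12500},
    {"user": "u0112", "endpoint": "/login",        "response_ms": 310},
    {"user": "u0409", "endpoint": "/payments",     "response_ms": 11900},
]

# "The list": slowest first, so the worst offenders (here, /payments)
# surface immediately.
for t in sorted(transactions, key=lambda t: t["response_ms"], reverse=True):
    print(f'{t["response_ms"]:>6} ms  {t["endpoint"]:<14}  {t["user"]}')
```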

In 2023, seamlessness has become a basic consumer expectation. When a consumer places an online order, they expect to have the option of receiving it within a day or two. When they request a car, they expect it to arrive within minutes. And when they open an app, they expect it to launch within seconds — no more than one or two. Exceeding this threshold puts your company's reputation at stake. By implementing NPM, businesses can ensure that when application issues arise, they can promptly rectify them, keeping customers satisfied and preventing more severe consequences.

Mark Troester is VP of Strategy at Progress
