Assuring Exceptional Experiences with Applications Requires Assuring Network Performance - Part 1
January 26, 2021

Nadeem Zahid
cPacket Networks


Network Performance Management and Diagnostics is an important aspect of Application Performance Management because application performance and experiences are intertwined with network performance. Networks connect end-users with applications; they also connect application components such as application servers and database servers, microservices, and IoT devices.

Internal and external end-users' experiences with enterprise and web-based (SaaS) applications directly impact an organization's success. These experiences may be formally specified with measurable metrics (for example, payment transaction response times) in a Service Level Experience (SLE). Externally, experiences impact customer satisfaction, retention and lifetime value. Within the organization, experiences affect employee satisfaction and productivity, including IT efficiency. Experiences also matter to automated processes, especially when specific timing tolerances are critical. Therefore, assuring exceptional experiences for all stakeholders and use cases is a critical success factor.


Frustration sets in for end-users who experience issues that are not proactively addressed. Customers may choose a competing service, and internal customers will be less productive, resulting in a negative impact on the organization's top and bottom lines. IT personnel also get frustrated while troubleshooting and resolving issues under pressure. Proactively assuring performance with predictive and prescriptive analytics, driven by monitoring data, is the ideal way to assure experiences: it averts both poor experiences and time-consuming, costly and frustrating troubleshooting and problem solving.

Experiences with applications that are directly impacted by network performance can be grouped into the following three high-level categories:

Connectivity determines whether end-users and other processes, including automation, can access an application.

Responsiveness is a quantitative or subjective measure of how acceptable interactions with an application are. For example, a target of receiving a response within one second is acceptable for many use cases (see the sketch after this list).

Quality is another quantitative or subjective measure of acceptability. For example, a videoconference session that has delays, dropouts and other noticeable issues would be rated as poor quality.
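
As a rough illustration of the responsiveness category, the following minimal Python sketch times a single request against a hypothetical endpoint (APP_URL and its /health path are assumptions, not part of the article) and compares the result with the one-second example target:

```python
import time
import urllib.request

# Hypothetical endpoint and target; substitute the application being assessed.
APP_URL = "https://app.example.com/health"
TARGET_SECONDS = 1.0  # the one-second example target mentioned above

def measure_response_time(url):
    """Return the wall-clock time for a single request, in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as response:
        response.read()  # drain the body so the full exchange is timed
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = measure_response_time(APP_URL)
    verdict = "acceptable" if elapsed <= TARGET_SECONDS else "too slow"
    print(f"{APP_URL}: {elapsed:.3f}s ({verdict})")
```

In practice such a probe would run repeatedly from several vantage points so that percentiles, not a single sample, are compared against the target.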

Assuring Exceptional Experiences Is Driving Performance Upgrades

High performance is often the way to assure responsiveness and quality. It typically means increased processing speed, which in turn relies on data transmission speed, especially for processing-intensive and streaming applications. Network throughput rates increase in steps; currently, the typical data rates are 10Gbps, 40Gbps, and 100Gbps. The need for performance, and hence speed, is driving upgrades of data center network data rates, and the corresponding monitoring, to operate at 100Gbps.

High-fidelity visibility and observability of the IT system's performance metrics are needed to manage and maximize user experiences. As data center networks continue migrating to 100Gbps data rates, monitoring resolution must keep pace.

Finding the Root Cause of Experience Issues

Customer support and IT help desks receive trouble tickets when performance issues occur. Tickets initiate an effort to resolve issues and start a timer that measures the mean time to resolution (MTTR), a common metric used to gauge IT performance. Maintaining a low MTTR is a direct indicator of IT effectiveness and efficiency and an indirect indicator of customer satisfaction. The typical next steps include escalating the issue to specific roles and personnel within the IT team to isolate the root cause by first determining whether the problem is with the network or the application.
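
MTTR itself is just the average elapsed time from ticket open to ticket resolution. A minimal sketch of that calculation, assuming tickets are available as hypothetical (opened, resolved) timestamp pairs:

```python
from datetime import datetime, timedelta

def mean_time_to_resolution(tickets):
    """Average elapsed time from ticket open to ticket resolution."""
    durations = [resolved - opened for opened, resolved in tickets]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical example: two resolved tickets
tickets = [
    (datetime(2021, 1, 4, 9, 0), datetime(2021, 1, 4, 11, 30)),   # 2h 30m
    (datetime(2021, 1, 5, 14, 0), datetime(2021, 1, 5, 14, 45)),  # 45m
]
print(mean_time_to_resolution(tickets))  # 1:37:30
```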

Investigating requires analyzing specific observable network and application behaviors and metrics. There are several entities and links between an end-user and an application that could cause connectivity issues if they malfunction. These include: the end-user's device, one or more networks (i.e., WAN, LAN, WLAN, DCN), the servers and other IT infrastructure hosting the application, and the application itself including underlying microservices and other software components.

Connectivity Issues

Let's look at a situation where network connectivity is inhibiting an employee's ability to access a custom application running within an organization's data center. The inability to access the application could be caused by a malfunction in any of the following connectivity stages:

■ Identity and Access Management

■ DHCP

■ DNS

■ Connectivity with the application server(s)

In such cases, the investigator(s) should look at observable health and performance metrics in hopes of quickly isolating the problem. Event logs and Internet Control Message Protocol (ICMP) tools such as ping are quick ways to check for the root cause of connectivity issues. If no problems are found, the investigator(s) can dig deeper by analyzing network packet data, examining observed traffic and SYN/SYN-ACK errors to determine whether the exchanges at each of the connectivity stages listed above, including TCP handshakes, are working properly.
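
A first-pass check of the DNS and application-server stages can also be scripted. The minimal Python sketch below assumes a hypothetical application host and port (APP_HOST, APP_PORT); it verifies that the name resolves and that a TCP handshake with the application server completes, which helps narrow down the failing stage before turning to packet-level analysis:

```python
import socket

# Hypothetical application host and port; substitute the real values.
APP_HOST = "app.internal.example.com"
APP_PORT = 443

def check_dns(hostname):
    """Return the resolved IP address, or None if DNS resolution fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def check_tcp_connect(ip, port, timeout=3.0):
    """Attempt a full TCP handshake (SYN, SYN-ACK, ACK) with the server."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    ip = check_dns(APP_HOST)
    if ip is None:
        print(f"DNS stage failed: {APP_HOST} did not resolve")
    elif not check_tcp_connect(ip, APP_PORT):
        print(f"Connectivity stage failed: no TCP handshake with {ip}:{APP_PORT}")
    else:
        print(f"{APP_HOST} ({ip}) accepted a connection on port {APP_PORT}")
```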

Go to: Assuring Exceptional Experiences with Applications Requires Assuring Network Performance - Part 2.

Nadeem Zahid is VP of Product Management & Marketing at cPacket Networks