
What You Should Be Monitoring to Ensure Digital Performance - Part 4

APMdigest asked experts from across the IT industry for their opinions on what IT departments should be monitoring to ensure digital performance. Part 4 covers the infrastructure, including the cloud and the network.

Start with What You Should Be Monitoring to Ensure Digital Performance - Part 1

Start with What You Should Be Monitoring to Ensure Digital Performance - Part 2

Start with What You Should Be Monitoring to Ensure Digital Performance - Part 3

DATA

Data performance monitoring is the most important aspect of ensuring digital performance. Data transformed into content dictates the end-user experience. Data represented as text, images, video, or voxels in extended reality requires continual monitoring to ensure quality of experience. IT departments can determine the amount of investment required to modify network and application components based upon data performance. Data visualization formats can also be modified to function on the status quo infrastructure until the upgrade investments are in place.
Dos Dosanjh
Director, Technical Marketing, Quali

Monitor slow-flow data (collected by standard discovery tools) to map changes in networks, apps, data, locations, or users, and fast-flow data (collected by log analyzers, webcasters, and real-time discovery) to overlay those changes on the dependency map.
Jeanne Morain
Author and Strategist, iSpeak Cloud
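The slow-flow/fast-flow overlay described above amounts to straightforward bookkeeping over a dependency graph. The sketch below is illustrative only; the function names and event fields are assumptions, not the API of any particular discovery tool:

```python
def build_dependency_map(slow_scan):
    """Seed the map from a periodic (slow-flow) discovery scan.

    slow_scan: list of (source, target) edges found by inventory tools.
    """
    dep_map = {}
    for source, target in slow_scan:
        dep_map.setdefault(source, set()).add(target)
    return dep_map

def overlay_fast_flow(dep_map, events):
    """Overlay near-real-time (fast-flow) events, e.g. from log analysis.

    events: list of dicts like {"source": ..., "target": ..., "op": "add"|"remove"}.
    Returns the edges that changed, so teams can see drift between full scans.
    """
    changed = []
    for ev in events:
        targets = dep_map.setdefault(ev["source"], set())
        if ev["op"] == "add" and ev["target"] not in targets:
            targets.add(ev["target"])
            changed.append(("added", ev["source"], ev["target"]))
        elif ev["op"] == "remove" and ev["target"] in targets:
            targets.remove(ev["target"])
            changed.append(("removed", ev["source"], ev["target"]))
    return changed
```

The point of returning only the changed edges is that fast-flow data is most useful as a diff against the last authoritative scan, not as a full rebuild.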

BIG DATA TECHNOLOGIES

Organizations need to be able to monitor the big data technology that modern applications are increasingly reliant upon. These apps need fast access to technologies such as Hadoop, Kafka, Spark and HBase in order to make business-critical decisions across all verticals, including financial services, retail, manufacturing, healthcare and telecom. For example, in finance, fraud detection leverages streaming data from systems like Kafka and Spark Streaming to collect and process information in order to detect irregular patterns and prevent fraudulent transactions. Streaming apps like these have complex distributed architectures and produce massive volumes of constantly changing data. This makes them susceptible to performance issues, jeopardizing the important business processes they support. It's critical that enterprises monitor these modern data apps with a strong application performance management (APM) platform that has end-to-end observability and AI-driven automation at the core.
Kunal Agarwal
CEO, Unravel Data
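One widely used health signal for Kafka-backed streaming apps of the kind described above is consumer lag: how far a consumer group's committed offsets trail the latest offsets on the broker. The sketch below works on plain dicts of offsets rather than fetching them from a live broker, and the partition names and threshold are illustrative assumptions:

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag = latest offset on the broker minus the offset
    the consumer group has committed. Large or growing lag means the
    streaming app is falling behind the data it is supposed to process."""
    return {tp: end_offsets[tp] - committed_offsets.get(tp, 0)
            for tp in end_offsets}

def lag_alerts(lag_by_partition, threshold):
    """Return the partitions whose lag exceeds an SLO-derived threshold."""
    return sorted(tp for tp, lag in lag_by_partition.items()
                  if lag > threshold)
```

In practice the offset dicts would be populated from the broker's admin or consumer API; the alerting logic itself stays this simple.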

MAINFRAME

Ever since servers of all types began processing transactions between businesses and web and mobile devices, the need for millisecond performance between the end user and the mainframe back-end data repository has existed. Along this path, there are numerous moving parts that contribute to a delightful or disastrous user experience. Focus must be given to monitoring mainframe performance, as most web and mobile applications end with a purchase, a bank account deposit, or some exchange that ultimately takes place on a back-end mainframe.
Kelly Vogt
Performance Consultant, Compuware

MIDDLEWARE

One component that people often miss: the middleware. Whether it be on-premises ESBs, cloud-based iPaaS, or some combination, if the middleware has an issue, it can adversely impact the customer experience.
Jason Bloomberg
President, Intellyx

INTERACTIONS BETWEEN CLOUDS

As organizations increasingly adopt multi-cloud strategies, there's a growing need to monitor not just the performance (speed, reliability) of individual cloud infrastructures themselves, but also the interactions between these platforms. When deploying a multi-cloud environment, fast, reliable interoperability between multiple cloud regions and providers can be the key to strong performance for entire end-to-end componentized applications.
Mehdi Daoudi
CEO and Founder, Catchpoint
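One way to make the inter-cloud interactions above measurable is to collect latency samples per region pair and rank the worst paths, so cross-provider links surface before individual-cloud metrics do. A minimal sketch, with hypothetical region names and no real network probing:

```python
import statistics

def path_summary(samples):
    """Summarize latency samples (seconds) for one inter-cloud path."""
    return {"p50": statistics.median(samples), "max": max(samples)}

def slowest_paths(measurements, n=3):
    """measurements: {(src_region, dst_region): [latency samples, ...]}.

    Rank cross-provider paths by median latency so the worst
    inter-cloud links surface first."""
    ranked = sorted(measurements.items(),
                    key=lambda kv: statistics.median(kv[1]),
                    reverse=True)
    return [(path, path_summary(samples)) for path, samples in ranked[:n]]
```

The samples themselves would come from whatever synthetic probes the organization already runs between regions; the ranking is what turns raw numbers into a prioritized list.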

NETWORK

Digital performance is a full-stack affair, so it's essential to monitor the network at the wire level all the way to the applications and real user experience.
Jason Bloomberg
President, Intellyx

The most important metric for IT teams to monitor is the performance of the network.
Douglas Roberts
VP and General Manager, Viavi Enterprise and Cloud Business Unit

Digital performance monitoring requires complete network visibility to expose hidden problems and soon-to-be problems.
Keith Bromley
Senior Manager, Solutions Marketing, Ixia Solutions Group, a Keysight Technologies business

Networks can be needy: They often require constant attention to ensure continuous uptime and properly defend against cyberattacks. Traditional symptom-based SNMP monitoring isn't enough to ensure (or enhance) digital performance; IT teams need to leverage proactive network monitoring and contextualized visibility. Waiting for a problem to occur and then trying to track down the source with outdated maps or protocols results in more time troubleshooting and increased MTTR, leaving less time and brain space for making strategic updates or otherwise optimizing digital performance. Instead of always operating in crisis mode, network teams need to employ network automation to continuously monitor for underlying faults, identify problems in context to speed recovery, and proactively enforce best practices.
Jason Baudreau
Product Specialist, NetBrain
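A toy version of the proactive check described above: learn a baseline from a healthy period of some metric (latency, error rate, interface utilization) and flag samples that drift beyond it, rather than waiting for a hard failure. The 3-sigma threshold is an arbitrary example, not a recommendation from any vendor:

```python
import statistics

def baseline(history):
    """Mean and standard deviation of a healthy-period metric history."""
    return statistics.mean(history), statistics.pstdev(history)

def flag_anomalies(history, recent, k=3.0):
    """Flag recent samples more than k standard deviations above the
    baseline -- a crude proactive check that can catch drift before
    users report it."""
    mean, stdev = baseline(history)
    limit = mean + k * stdev
    return [sample for sample in recent if sample > limit]
```

Real network automation platforms layer topology context on top of a check like this; the value of even this crude form is that it fires on deviation from normal, not on outage.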

SD-WAN

Ensuring digital performance today can be a proxy for keeping employees productive. When networks are slow, people are slow. The adoption of SD-WAN is enticing for large enterprises seeking to lower costs or increase flexibility for remote locations, but it's not a silver bullet, and what's missing from this discussion is end-to-end performance baselining. SD-WAN has no effect on issues outside of the WAN, and the application delivery path is constantly changing, so monitoring before, during, and after SD-WAN deployment, across the entire connection, is essential to finding and fixing issues that degrade user experience, no matter where the issue occurs.
Sean Armstrong
VP of Product, AppNeta
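The before/during/after baselining argument above can be reduced to a simple comparison: capture per-application latency baselines before the rollout, capture them again after, and report regressions. The sketch below is a minimal illustration; the app names and the 1.2x tolerance are invented examples:

```python
import statistics

def regressions(before, after, tolerance=1.2):
    """Compare per-app end-to-end latency samples captured before and
    after an SD-WAN rollout; report apps whose median latency grew by
    more than `tolerance`x (an arbitrary example threshold).

    before/after: {app_name: [latency samples in ms, ...]}."""
    worse = {}
    for app, samples in after.items():
        old = statistics.median(before.get(app, samples))
        new = statistics.median(samples)
        if old > 0 and new / old > tolerance:
            worse[app] = round(new / old, 2)
    return worse
```

The same comparison run continuously during the rollout is what distinguishes "the WAN got cheaper" from "the user experience got worse."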

Read What You Should Be Monitoring to Ensure Digital Performance - Part 5, the final installment, with some recommendations you may not have thought about.
