
What You Should Be Monitoring to Ensure Digital Performance - Part 2

APMdigest asked experts from across the IT industry for their opinions on what IT departments should be monitoring to ensure digital performance. Part 2 covers key performance metrics like availability and response time.

Start with What You Should Be Monitoring to Ensure Digital Performance - Part 1

AVAILABILITY

To ensure digital performance, availability is one of three key performance areas I always recommend monitoring. Your applications and networks must first be available to service users and customers. Otherwise, they're useful to no one.
Jean Tunis
Senior Consultant and Founder of RootPerformance

Monitoring the login page of an application with a synthetic transaction is an essential part of an enterprise monitoring strategy. Active monitoring is a good starting point for visibility into application availability, especially when monitoring from outside the data center. Synthetic transactions can provide location-based availability and act as a barometer for measuring application performance.
Larry Dragich
Technology Executive and Founder of the APM Strategies Group on LinkedIn

Read Larry Dragich's latest blog: Digital Intelligence - Why Traditional APM Tools Aren't Sufficient

Read Larry Dragich's new white paper: The Case for Converged Application & Infrastructure Performance Monitoring
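
As a concrete illustration of Larry Dragich's point, the sketch below performs a single synthetic check against a login page. It is a minimal example, not a production probe: the URL and timeout are placeholders, and a real deployment would run checks on a schedule from multiple locations and feed the results into a monitoring platform.

    import time
    import requests  # third-party HTTP client: pip install requests

    LOGIN_URL = "https://app.example.com/login"  # hypothetical login page
    TIMEOUT_SECONDS = 10

    def check_login_page(url: str) -> dict:
        """Run one synthetic availability check and time the response."""
        start = time.monotonic()
        try:
            response = requests.get(url, timeout=TIMEOUT_SECONDS)
            elapsed = time.monotonic() - start
            return {
                "available": response.status_code == 200,
                "status_code": response.status_code,
                "response_time_s": round(elapsed, 3),
            }
        except requests.RequestException as exc:
            # Timeouts and connection failures count as unavailability.
            return {"available": False, "error": str(exc)}

    print(check_login_page(LOGIN_URL))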

INFRASTRUCTURE RISK

Understanding infrastructure risk is a key component of monitoring that most organizations miss. APM tools do a great job of tracking left-to-right performance across an application, and modern application designs aim to ensure that no single component can cause a failure. Building an understanding of the risk inherent in the current IT infrastructure, below the application, is critical for preventing unexpected downtime and sudden capacity limits. You can do that by tracking the links between overlay and underlay networks, from file systems to storage units, and from hypervisors to server hardware, or you can use a unified monitoring tool to do it for you. The key buying decision: can you see the IT infrastructure risk for the specific components that your application relies on?
Kent Erickson
Alliance Strategist, Zenoss
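
To make the idea of tracking those links more tangible, here is a toy sketch of a dependency map walked from an application component down to the hardware beneath it. All component names are hypothetical, and a real unified monitoring tool would discover these relationships automatically rather than hard-coding them:

    # Hypothetical dependency map: each component lists what it relies on.
    DEPENDS_ON = {
        "checkout-service": ["vm-web-01"],
        "vm-web-01": ["hypervisor-a", "overlay-net-1"],
        "overlay-net-1": ["underlay-switch-7"],
        "hypervisor-a": ["server-hw-42", "filesystem-fs1"],
        "filesystem-fs1": ["storage-array-3"],
    }

    def infrastructure_below(component: str) -> set:
        """Return every element a component transitively relies on."""
        seen = set()
        stack = [component]
        while stack:
            for dep in DEPENDS_ON.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    # Which infrastructure, below the application, puts checkout at risk?
    print(infrastructure_below("checkout-service"))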

THROUGHPUT

To ensure digital performance, throughput is one of three key performance areas that must be included. Applications and networks must be able to provide all the relevant data that is required to fulfill a specific request. Monitoring throughput ensures you know when your systems do not deliver all of the data that was requested, and you can act on it before the complaints come in.
Jean Tunis
Senior Consultant and Founder of RootPerformance
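
One simple way to detect incomplete delivery, sketched below under the assumption of a plain HTTP endpoint, is to compare the bytes actually received against the Content-Length the server promised, while also recording the effective transfer rate:

    import time
    import requests  # pip install requests

    def measure_throughput(url: str) -> dict:
        """Download a resource; report bytes received versus bytes promised."""
        start = time.monotonic()
        response = requests.get(url, stream=True, timeout=30)
        received = sum(len(chunk) for chunk in response.iter_content(8192))
        elapsed = time.monotonic() - start
        header = response.headers.get("Content-Length")
        expected = int(header) if header else None  # server may not send one
        return {
            "bytes_received": received,
            "bytes_expected": expected,
            "complete": expected is None or received == expected,
            "bytes_per_second": received / elapsed if elapsed else 0.0,
        }

    print(measure_throughput("https://www.example.com/"))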

PAGE LOAD SPEED

Ultimately, you want to be monitoring everything that impacts customer experience and conversion rates, but the most important metric is page load speed. It drives more conversions than any other factor. The key pages are those at the beginning of a user journey, since the more time someone has invested in the process, the less likely they are to abandon it.
Antony Edwards
CTO, Eggplant
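
Measuring true page load speed requires a real browser, since rendering time matters as much as network time. One common approach, sketched here with Selenium and a local ChromeDriver (tooling assumptions, not tools named by the author), is to read the browser's own Navigation Timing data after the page finishes loading:

    from selenium import webdriver  # pip install selenium; needs ChromeDriver

    driver = webdriver.Chrome()
    driver.get("https://www.example.com/")  # a key page early in the user journey
    # The browser's Navigation Timing API reports what the user experienced.
    timing = driver.execute_script(
        "var t = performance.timing;"
        "return {loadMs: t.loadEventEnd - t.navigationStart,"
        "        ttfbMs: t.responseStart - t.navigationStart};"
    )
    print(timing)
    driver.quit()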

RESPONSE TIME

To ensure digital performance, response time is one of three key performance areas that must not be forgotten. Requests for specific information from users must be fulfilled as quickly as possible. This is a common expectation of every IT system, so response times should always be monitored.
Jean Tunis
Senior Consultant and Founder of RootPerformance

Monitor application response from user to application (last mile) and from application to the data (middle mile) to measure not only whether the app is up but whether it is actually working.
Jeanne Morain
Author and Strategist, iSpeak Cloud
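
A minimal sketch of that two-hop view follows. The endpoints are hypothetical: the first probe times the user-facing page (last mile), and the second times an internal endpoint that exercises the data tier (middle mile), standing in for a real application-to-database measurement:

    import time
    import requests  # pip install requests

    def timed_get(url: str) -> float:
        """Return the seconds taken to fetch a URL successfully."""
        start = time.monotonic()
        requests.get(url, timeout=10).raise_for_status()
        return time.monotonic() - start

    last_mile = timed_get("https://app.example.com/")             # user to app
    middle_mile = timed_get("https://app.example.com/health/db")  # app to data
    print(f"last mile: {last_mile:.3f}s, middle mile: {middle_mile:.3f}s")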

TRANSACTION UPTIME

A good starting point is to implement end-to-end performance monitoring with real transaction uptime to complement your APM tools.
Sven Hammar
Founder and CSO, Apica

TIME TO FIRST BYTE

Initial motivation in the user journey can be lost very quickly if, for example, the first click on an advertisement or the first login to an application is not performant. The appearance of performance is important; monitoring time to first byte (TTFB) can help ascertain what a user experiences as the page or app marches toward a minimum viable/viewable product (MVP) before loading to completion. TTFB is a leading indicator of web performance for the end user, and the leading search engines also factor it into page rank, with more performant pages ranking higher.
Ravi Lachhman
Evangelist, AppDynamics
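
TTFB is straightforward to approximate from a script. The sketch below times the interval from issuing the request, including connection setup, to the arrival of the first byte of the response body; a browser's measurement would break out DNS and TLS separately:

    import time
    import requests  # pip install requests

    def time_to_first_byte(url: str) -> float:
        """Seconds from issuing the request until the first body byte arrives."""
        start = time.monotonic()
        response = requests.get(url, stream=True, timeout=10)
        next(response.iter_content(1))  # blocks until one byte is available
        return time.monotonic() - start

    print(f"TTFB: {time_to_first_byte('https://www.example.com/'):.3f}s")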

LOG EVENTS

If it has an IP address, it sends logs, and those logs must be monitored to gain detailed insight into server performance, security, error messages, and underlying issues.
Clayton Dukes
CEO, LogZilla
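
As a small illustration of that point, the sketch below counts error-level events in a single log file. The path and severity keywords are assumptions; in practice, logs are shipped to a central platform or SIEM rather than scanned ad hoc:

    import re
    from collections import Counter

    ERROR_PATTERN = re.compile(r"\b(ERROR|CRITICAL|FATAL)\b")

    def summarize_errors(path: str) -> Counter:
        """Count error-level events in a log file, grouped by severity."""
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = ERROR_PATTERN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    print(summarize_errors("/var/log/app/server.log"))  # hypothetical path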

Logs have been around since the dawn of computing, but with constantly increasing threats, logs are more important than ever. Log events are one of the key data sources SIEM (Security Information and Event Management) solutions use for threat detection.
Otis Gospodnetić
Founder, Sematext

Read What You Should Be Monitoring to Ensure Digital Performance - Part 3, covering the development side.
