Business Service Reliability: Correlating the Customer Experience and Business Outcomes

A recent poll on Business Service Reliability (BSR) found that IT must focus more on managing and measuring the customer experience to improve business outcomes.

The poll, conducted by IDG Research Services on behalf of CA Technologies, sought to determine how organizations measure both BSR and the customer experience that IT provides. Business Service Reliability is a new approach to helping IT transform by providing a clearly defined framework for managing and measuring customer interactions.

The majority of respondents (58 percent) use a combination of surveys and other metrics (e.g., application downtime and call-center volume) to measure the customer experience.

Just over one quarter (26 percent) reported that IT delivers an exceptional experience. The majority of respondents (61 percent) classified the customer experience as adequate.

The remainder (13 percent) described it as inconsistent: the customer experience meets the expectations of the business some, but not all, of the time.

The lack of a single unified view of application health and customer experience (45 percent) was cited as the top obstacle to providing an exceptional customer experience, followed by the inability to link end-user transaction issues to infrastructure, application and network components (35 percent), and the difficulty of prioritizing application issues based on business impact (35 percent).

Improved customer satisfaction, loyalty and acquisition (65 percent) were chosen as the top benefits of Business Service Reliability, followed by faster delivery of new services (45 percent) and increased productivity (45 percent).

Increased communication with the lines of business to determine where the problems lie was the number one action item for improving Business Service Reliability, followed by investments in Infrastructure Management and Application Performance Management.

IT organizations need to better understand and optimize the end user's experience with business services, rather than merely tracking metrics and thresholds for the individual server, storage, network and software components that support end-to-end service delivery. Even when every component is up 99.99 percent of the time, a customer may still have a bad experience with the service.
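To see why, note that availability compounds across a serial delivery chain: if every component must work for the service to work, per-component uptimes multiply. The short sketch below uses illustrative numbers (a 30-component chain at 99.99 percent each, an assumption for this example rather than a figure from the poll):

```python
# Illustrative sketch: end-to-end availability of a serial service chain.
# The component count and per-component availability are assumptions
# for this example, not figures from the IDG/CA poll.

HOURS_PER_YEAR = 24 * 365

def chain_availability(per_component: float, components: int) -> float:
    """Availability of a chain in which every component must be up."""
    return per_component ** components

end_to_end = chain_availability(0.9999, 30)

print(f"End-to-end availability: {end_to_end:.4%}")  # ~99.7004%
print(f"Degraded hours per year: {(1 - end_to_end) * HOURS_PER_YEAR:.1f}")  # ~26.2
```

Every component clears its own four-nines target, yet the chain as a whole loses roughly a day of reliable operation per year, which is why component-level dashboards can be all green while customers still struggle.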

In today’s competitive market, customer experience has become one of the most critical ways to differentiate a business. By focusing on the 5 nines of customer experience rather than the 5 nines of availability, IT can help ensure every link in the service delivery chain is not only available, but is performing as intended and interacting appropriately to consistently deliver successful customer interactions.

By taking a formulaic approach to quantifying key success metrics, Business Service Reliability enables IT to concentrate on making sure every customer interaction is successful. IT organizations that succeed at managing the holistic service, rather than just the availability of supporting components, can deliver greater value to the business and spend less of their time clearing irrelevant alerts from their monitoring consoles.
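As a minimal sketch of what such a formulaic approach could look like (the success criteria and latency threshold here are illustrative assumptions, not CA's published BSR formula), an interaction might count as successful only if it completes without error and within a latency target:

```python
# Minimal sketch of an interaction-level reliability metric.
# Treating "completed AND fast enough" as success is an illustrative
# assumption, not CA Technologies' published BSR formula.

from dataclasses import dataclass

@dataclass
class Interaction:
    completed: bool    # did the transaction finish without error?
    latency_ms: float  # end-to-end response time seen by the customer

LATENCY_SLO_MS = 2000.0  # assumed threshold for an acceptable experience

def is_successful(i: Interaction) -> bool:
    """An interaction succeeds only if it completes fast enough."""
    return i.completed and i.latency_ms <= LATENCY_SLO_MS

def reliability(interactions: list[Interaction]) -> float:
    """Share of customer interactions that were actually successful."""
    if not interactions:
        return 1.0
    return sum(is_successful(i) for i in interactions) / len(interactions)

sample = [
    Interaction(completed=True, latency_ms=350),
    Interaction(completed=True, latency_ms=4200),  # slow: a failed experience
    Interaction(completed=False, latency_ms=120),  # errored: also a failure
    Interaction(completed=True, latency_ms=900),
]
print(f"Interaction-level reliability: {reliability(sample):.2%}")  # 50.00%
```

The service was "up" for all four sample interactions, yet the interaction-level score is 50 percent; that gap is exactly what a pure availability view hides.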

About Tony Davis

Tony Davis is a 23-year veteran of the IT industry specializing in IT service reliability and strategy, currently serving as Vice President of Solution Strategy & Sr. Consulting Fellow for CA Technologies' North American Service Assurance business. Davis is the author of If My Availability is So Good, Why Do My Customers Feel So Bad?, a pragmatic guide to the 5 nines of customer experience.
